'Beyond experiments' Perspectives on Psychological Science (forthcoming) (co-authored with Ed Diener, Mike Zyphur, and Steve West). (This is a final draft.)
Synopsis: Experiments are overvalued and overused in psychology. They work best only when combined with other methods.
'Economic theory and empirical science' forthcoming in Conrad Heilmann and Julian Reiss (eds), Routledge Handbook of Philosophy of Economics. (This is a final draft.)
Synopsis: Economists over-invest in orthodox theory. Arguments for orthodoxy, and against heterodoxy, do not hold up.
'Back to the big picture' Journal of Economic Methodology (forthcoming) (co-authored with Anna Alexandrova and Jack Wright). (This is a late version, with an abstract.)
Synopsis: Methodologists should re-focus on big-picture issues that concern the economics profession as a whole.
'Prediction, history, and political science' in Oxford Handbook of Philosophy of Political Science (Oxford forthcoming, Harold Kincaid and Jeroen van Bouwel, eds). (This is a late draft.)
Synopsis: In political science, usually we should insist on either prediction or contextual history. Explanations are only narrow-scope.
'Pre-emption cases may support, not undermine, the counterfactual theory of causation' Synthese, 2021, 198.1: 537-555.
Synopsis: Intuitions in pre-emption cases vary. The most promising overall account of them favours a simple counterfactual theory.
'Big data and prediction: four case studies' Studies in History and Philosophy of Science, 2020, 86: 96-104.
Synopsis: In four important cases, big data methods do not improve predictions much, nor are they likely to in the future.
'Prediction versus accommodation in economics' Journal of Economic Methodology, 2019, 26.1: 59-69. (This is a late draft.)
Synopsis: Prediction in economics is often essential but this is underappreciated.
'Free will is not a testable hypothesis' Erkenntnis, 2019, 84.3: 617-631. (This is a late draft.)
Synopsis: Whether or not we have free will cannot feasibly be tested by science, either now or in the future.
'The efficiency question in economics' Philosophy of Science, 2018, 85: 1140-1151. (This is a late draft.)
Synopsis: Resources in economics should be re-directed from theory to empirical work. Philosophers have unduly ignored this issue.
'Conceived this way: Innateness defended' Philosophers' Imprint, 2018, 18.18: 1-16 (co-authored with Gualtiero Piccinini).
Synopsis: Innateness is a useful umbrella concept, best defined in terms of recent causal theory.
'When are purely predictive models best?' Disputatio, 2017, 9.47: 631-656.
Synopsis: If forced to choose, prediction trumps explanation, and so purely predictive models trump causal ones.
'A Dilemma for the Doomsday Argument' Ratio, 2016, 29.3: 268-282. (This is a late draft.)
Synopsis: A new case suggests significant constraints on when the Doomsday Argument can be of interest.
'Opinion polling and election predictions' Philosophy of Science, 2015, 82: 1260-1271. (This is a late draft.)
Synopsis: A rare empirical success story for social science suggests we should de-emphasize mechanisms and metaphysics.
'The Prisoner's Dilemma doesn't explain much', chapter 4 in The Prisoner's Dilemma (Cambridge 2015, Martin Peterson, ed), pp64-84 (co-authored with Anna Alexandrova). (This is a late draft.)
Synopsis: The Prisoner's Dilemma game has been greatly overrated and over-studied: it explains almost nothing.
'Harm and causation' Utilitas, 2015, 27.2: 147-164. (This is a late draft.)
Synopsis: Importing recent work from the causation literature leads to a better definition of harm.
'It's just a feeling: why economic models do not explain' Journal of Economic Methodology, 2013, 20.3: 262-267 (co-authored with Anna Alexandrova). (This is a late draft.)
Synopsis: Formal economic models are not explanatory, contrary to widespread assumption and mistaken intuition.
'Degree of explanation' Synthese, 2013, 190.15: 3087-3105
Synopsis: Develops a formal definition of degree of explanation. (Considerable intricacy.)
'Verisimilitude: a causal approach' Synthese, 2013, 190.9: 1471-1488
Synopsis: Verisimilitude cannot be well defined for theories as a whole. Develops a localised definition instead.
'Partial explanations in social science', chapter 7 in Oxford Handbook of Philosophy of Social Science (Oxford 2012, Harold Kincaid, ed), pp130-153. (This is a late draft.)
Synopsis: Develops a definition of partial explanation, and relates it to several standard methods in social science.
'Genetic traits and causal explanation', chapter 4 in Philosophy of Behavioral Biology (Springer: Boston Studies in Philosophy of Science 282, 2012, Kathryn Plaisance and Thomas Reydon, eds), pp65-82. (This is a late draft.)
Synopsis: Develops a relational definition of genetic traits and dispositions. (Considerable intricacy.)
'How necessary are randomized controlled trials?', in Ronald Munson, Intervention and Reflection: Basic Issues in Medical Ethics (9th edition, Thomson Wadsworth, 2012), pp187-191. (This is a late draft.)
Synopsis: Randomized controlled trials are often unnecessary in medical research; alternatives are sometimes better.
'Natural-born determinists: a new defense of causation as probability-raising' Philosophical Studies, 2010, 150.1: 1-20
Synopsis: Solves problem cases for the probability-raising definition of causation by incorporating psychology.
'Walsh on causes and evolution' Philosophy of Science, 2010, 77.3: 457-467
Synopsis: Defends a causalist view of natural selection against some criticisms by Walsh.
'On Lewis, Schaffer and the non-reductive evaluation of counterfactuals' Theoria, 2009, 75.4: 336-343
Synopsis: A new example suggests Schaffer's amendment of Lewis should itself be amended by causal modelling methods.
'Is actual difference making actually different?' Journal of Philosophy, 2009, 106.11: 629-634
Synopsis: Waters's actual difference maker analysis does not overcome Mill's causal parity thesis.
'Progress in economics', chapter 11 in Oxford Handbook of Philosophy of Economics (Oxford 2009, Don Ross and Harold Kincaid, eds), pp306-337 (co-authored with Anna Alexandrova). (This is a late draft.)
Synopsis: Progress in economics does not consist in theory development but rather in improved local interventions.
'Causation and contrast classes' Philosophical Studies, 2008, 139.1: 111-123
Synopsis: Causation is a contrastive relation in both cause and effect slots. (Considerable intricacy.)
'Weighted explanations in history' Philosophy of the Social Sciences, 2008, 38.1: 76-96
Synopsis: Application of the causation literature shows that many common assumptions in historical practice are mistaken.
'Can ANOVA measure causal strength?' Quarterly Review of Biology, 2008, 83.1: 47-55
Synopsis: Contrary to common practice, ANOVA is ill suited to measuring a factor's causal importance.
'Causal efficacy and the analysis of variance' Biology and Philosophy, 2006, 21.2: 253-276
Synopsis: Unifies the singleton and population senses of causal efficacy. ANOVA is ill suited to elucidating even the latter.
'Pearson's wrong turning: against statistical measures of causal efficacy' Philosophy of Science, 2005, 72.5: 900-912
Synopsis: Standard statistical measures of strength of association are ill suited to a causal interpretation.
'Comparing apples with oranges' Analysis, 2005, 65.1: 12-18
Synopsis: For their contributions to be comparable, it's neither necessary nor sufficient that two causes be commensurable.
'Bad luck or the ref's fault?', in Soccer and Philosophy (Open Court, 2010, Ted Richards, ed), pp319-326. (This is a late draft.)
'The Irrational Game: why there’s no perfect system', in Poker and Philosophy (Open Court, 2006, Eric Bronson, ed), pp105-115. (This is a late draft.)
Review of ‘The Scientific Study of Society’ by Max Steuer, Economics and Philosophy 2004, 20.2: 375-381. (This is a late draft.)
In more detail
Beyond experiments
forthcoming in Perspectives on Psychological Science (co-authored with Ed Diener, Mike Zyphur, and Steve West)
It is often claimed that only experiments can support strong causal inferences, and that therefore they should be privileged in the behavioral sciences. We disagree. Overvaluing experiments results in their overuse both by researchers and decision-makers, and in an underappreciation of their shortcomings. Neglect of other methods often follows. Experiments can suggest whether X causes Y in a specific experimental setting; however, they often fail to elucidate either the mechanisms responsible for an effect, or the strength of an effect in everyday natural settings. In this paper, we consider two overarching issues. First, experiments have important limitations. We highlight problems with external, construct, statistical-conclusion, and internal validity; with replicability; and with conceptual issues associated with simple X-causes-Y thinking. Second, quasi-experimental and non-experimental methods are absolutely essential. As well as themselves estimating causal effects, these other methods can provide information and understanding that goes beyond that provided by experiments. A research program progresses best when experiments are not treated as privileged but instead are combined with these other methods.
Economic theory and empirical science
forthcoming in Conrad Heilmann and Julian Reiss (eds), Routledge Handbook of Philosophy of Economics
Economics over-invests in orthodox rational choice models. The problem with them is not that they are idealized but that they lack empirical success, and when empirical success is achieved their contribution to it is typically only heuristic. As a result, many of the alleged benefits of orthodox models do not hold up: they do not explain, they do not provide understanding in terms of agents' rational choices, and they do not generalize across cases. Their presumed advantage over heterodox models and methods melts away. A pressing issue then becomes the discipline-wide balance of work across different methods. The recent empirical turn in economics is an example of such re-balancing, and is to be welcomed.
Back to the big picture
forthcoming in Journal of Economic Methodology (co-authored with Anna Alexandrova and Jack Wright)
We distinguish between two strategies in the methodology of economics. The big-picture strategy, dominant in the twentieth century, ascribed to economics a unified method and evaluated this method against a single criterion of ‘science’. In the last thirty years a second strategy has gained prominence: fine-grained studies of how some specific technique common in economics can achieve one or more epistemic goals. We argue that recent developments in philosophy of science and in economics warrant a return to the big-picture strategy – but now reinvented. It should focus on a new question, already intensely debated within the profession: is the organization of economics healthy and appropriate?
Prediction, history, and political science
forthcoming in Oxford Handbook of Philosophy of Political Science (Oxford, Harold Kincaid and Jeroen van Bouwel, eds)
Political science usually requires either prediction or contextual historical work to succeed. Because of the difficulty of prediction, the main focus should often be contextual historical work. Both of these methods favor narrow-scope explanations. I illustrate, via an example, the role that this still leaves for theory. I conclude by assessing the scope for political science to offer policy advice. All of this tells against several practices that are widespread in the discipline.
Pre-emption cases may support, not undermine, the counterfactual theory of causation
Synthese, 2021, 198.1: 537-555
Pre-emption cases have been taken by almost everyone to imply the unviability of the simple counterfactual theory of causation. Yet there is ample motivation from scientific practice to endorse a simple version of the theory if we can. There is a way in which a simple counterfactual theory, at least if understood contrastively, can be supported even while acknowledging that intuition goes firmly against it in pre-emption cases – or rather, only in some of those cases. For I present several new pre-emption cases in which causal intuition does not go against the counterfactual theory, a fact that has been verified experimentally. I suggest an account of framing effects that can square the circle. Crucially, this account offers hope of theoretical salvation – but only to the counterfactual theory of causation, not to others. Again, there is (admittedly only preliminary) experimental support for this account.
Big data and prediction: four case studies
Studies in History and Philosophy of Science, 2020, 86: 96-104
Has the rise of data-intensive science, or ‘big data’, revolutionized our ability to predict? Does it imply a new priority for prediction over causal understanding, and a diminished role for theory and human experts? I examine four important cases where prediction is desirable: political elections, the weather, GDP, and the results of interventions suggested by economic experiments. These case studies extend existing philosophical analysis. They also suggest caution. Although big data methods are indeed very useful sometimes, in this paper’s cases they improve predictions either limitedly or not at all, and their prospects of doing so in the future are limited too.
Prediction versus accommodation in economics
Journal of Economic Methodology, 2019, 26.1: 59-69
Should we insist on prediction, i.e. on correctly forecasting the future? Or can we rest content with accommodation, i.e. empirical success only with respect to the past? I apply general considerations about this issue to the case of economics. In particular, I examine various ways in which mere accommodation can be sufficient, in order to see whether those ways apply to economics. Two conclusions result. First, an entanglement thesis: the need for prediction is entangled with the methodological role of orthodox economic theory. Second, a conditional predictivism: if we are not committed to orthodox economic theory, then (often) we should demand prediction rather than accommodation – against most current practice.
Free will is not a testable hypothesis
Erkenntnis, 2019, 84.3: 617-631
Much recent work in neuroscience aims to shed light on whether we have free will. Can it? Can any science? To answer, we need to disentangle different notions of free will, and clarify what we mean by ‘empirical’ and ‘testable’. That done, my main conclusion is, duly interpreted: that free will is not a testable hypothesis. In particular, it is neither verifiable nor falsifiable by empirical evidence. The arguments for this are not a priori but rather are based on a posteriori consideration of the relevant neuroscientific investigations, as well as on standard philosophy of science work on the notion of testability. The conclusion is arguably double-edged: on the one hand, free will is shielded from scientific refutation. On the other hand, it forfeits being of any interest to science.
The efficiency question in economics
Philosophy of Science, 2018, 85: 1140-1151
Much philosophical attention has been devoted to whether economic models explain, and more generally to how scientific models represent. Yet there is an issue more practically important to economics than either of these, which I label the efficiency question: regardless of how exactly models represent, or of whether their role is explanatory or something else, is current modeling practice an efficient way to achieve these goals – or should research efforts be redirected? In addition to showing how the efficiency question has been relatively neglected, I give two examples of the kind of analysis it requires.
Conceived this way: Innateness defended
Philosophers' Imprint, 2018, 18.18: 1-16 (co-authored with Gualtiero Piccinini)
We propose a novel account of the distinction between innate and acquired biological traits: biological traits are innate to the degree that they are caused by factors intrinsic to the organism at the time of its origin; they are acquired to the degree that they are caused by factors extrinsic to the organism. This account borrows from recent work on causation in order to make rigorous the notion of quantitative contributions to traits by different factors in development. We avoid the pitfalls of previous accounts and argue that the distinction between innate and acquired traits is scientifically useful. We therefore address not only previous accounts of innateness but also skeptics about any account. The two are linked, in that a better account of innateness also enables us better to address the skeptics.
When are purely predictive models best?
Disputatio, 2017, 9.47: 631-656
Can purely predictive models be useful in investigating causal systems? I argue “yes”. Moreover, in many cases not only are they useful, they are essential. The alternative is to stick to models or mechanisms drawn from well-understood theory. But a necessary condition for explanation is empirical success, and in many cases in social and field sciences such success can only be achieved by purely predictive models, not by ones drawn from theory. Alas, the attempt to use theory to achieve explanation or insight without empirical success therefore fails, leaving us with the worst of both worlds – neither prediction nor explanation. Best go with empirical success by any means necessary. I support these methodological claims via case studies of two impressive feats of predictive modelling: opinion polling of political elections, and weather forecasting.
A Dilemma for the Doomsday Argument
Ratio, 2016, 29.3: 268-282
I present a new case in which the Doomsday Argument (‘DA’) runs afoul of epistemic intuition much more strongly than before. This leads to a dilemma: in the new case either DA is committed to unacceptable counterintuitiveness and belief in miracles, or else it is irrelevant. I then explore under what conditions DA can escape this dilemma. The discussion turns on several issues that have not been much emphasised in previous work on DA: a concern that I label trumping; the degree of uncertainty about relevant probability estimates; and the exact sequence in which we integrate DA and empirical concerns. I conclude that only given a particular configuration of these factors might DA still be of interest.
Opinion polling and election predictions
Philosophy of Science, 2015, 82: 1260-1271
Election prediction by means of opinion polling is a rare empirical success story for social science, but one not previously considered by philosophers. I examine the details of a prominent case, namely the 2012 US presidential election, and draw two lessons of more general interest:
1) Methodology over metaphysics. Traditional metaphysical criteria were not a useful guide to whether successful prediction would be possible; instead, the crucial thing was selecting an effective methodology.
2) Which methodology? Success required sophisticated use of case-specific evidence from opinion polling. The pursuit of explanations via general theory or causal mechanisms, by contrast, turned out to be precisely the wrong path – contrary to much recent philosophy of social science.
Harm and causation
Utilitas, 2015, 27.2: 147-164
I propose an analysis of harm in terms of causation: harm is when a subject is caused to be worse off. The pay-off from this lies in the details. In particular, importing influential recent work from the causation literature yields a contrastive-counterfactual account. This enables us to incorporate harm’s multiple senses into a unified scheme, and to provide that scheme with theoretical ballast. It also enables us to respond effectively to previous criticisms of counterfactual accounts, as well as to sharpen criticisms of rival views.
The Prisoner's Dilemma doesn't explain much
chapter 4 in The Prisoner's Dilemma (Cambridge 2015, Martin Peterson, ed), pp64-84 (co-authored with Anna Alexandrova)
We make the case that the Prisoner’s Dilemma, notwithstanding its fame and the quantity of intellectual resources devoted to it, has largely failed to explain any phenomena of social scientific or biological interest. At the heart of the paper we examine in detail a famous purported example of Prisoner’s Dilemma empirical success, namely Axelrod’s analysis of WWI trench warfare, and argue that this success is greatly overstated. Further, we explain why this negative verdict is likely true generally and not just in our case study. We also address some possible defenses of the Prisoner’s Dilemma.
It's just a feeling: why economic models do not explain
Journal of Economic Methodology, 2013, 20.3: 262-267
(co-authored with Anna Alexandrova)
Julian Reiss (2012) poses the following trilemma:
1) Economic models are false
2) Economic models are nevertheless explanatory
3) Only true accounts explain
We discuss the trilemma’s second horn. In particular, we deny that economic models are explanatory. We begin by reiterating briefly why economic models are indeed not explanatory, then give reasons why intuitions to the contrary should be distrusted, before exploring why such mistaken intuitions might arise in the first place.
Degree of explanation
Synthese, 2013, 190.15: 3087-3105
Partial explanations are everywhere. That is, explanations citing causes that explain some but not all of an effect are ubiquitous across science, and these in turn rely on the notion of degree of explanation. I argue that current accounts are seriously deficient. In particular, they do not incorporate adequately the way in which a cause’s explanatory importance varies with choice of explanandum. Using influential recent contrastive theories, I develop quantitative definitions that remedy this lacuna, and relate them to existing measures of degree of causation. Among other things, this reveals the precise role here of chance, as well as bearing on the relation between causal explanation and causation itself.
Verisimilitude: a causal approach
Synthese, 2013, 190.9: 1471-1488
I present a new definition of verisimilitude, framed in terms of causes. Roughly speaking, according to it a scientific model is approximately true if it captures accurately the strengths of the causes present in any given situation. Against much of the literature, I argue that any satisfactory account of verisimilitude must inevitably restrict its judgments to context-specific models rather than general theories. We may still endorse – and only need – a relativized notion of scientific progress, understood now not as global advance but rather as the mastering of particular problems. This also sheds new light on longstanding difficulties surrounding language-dependence, and on models committed to false ontologies.
Partial explanations in social science
chapter 7 in Oxford Handbook of Philosophy of Social Science (Oxford 2012, Harold Kincaid, ed), pp130-153
Comparing different causes’ importance, and apportioning responsibility between them, requires making good sense of the notion of partial explanation, that is, of degree of explanation. How much is this subjective, how much objective? If the causes in question are probabilistic, how much is the outcome due to them and how much to simple chance? I formulate the notion of degree of causation, or effect size, relating it to influential recent work in the literature on causation. I examine to what extent mainstream social science methods – both quantitative and qualitative – succeed in establishing effect sizes so understood. The answer turns out to be, roughly: only to some extent. Next, the standard understanding of effect size, even though widespread, still has several underappreciated consequences. I detail some of those. Finally, I discuss the separate issue of explanandum-dependence, which is essential to assessing any cause’s explanatory importance and yet which has been comparatively neglected.
Genetic traits and causal explanation
chapter 4 in Philosophy of Behavioral Biology (Springer: Boston Studies in Philosophy of Science 282, 2012, Kathryn Plaisance and Thomas Reydon, eds), pp65-82
I use a contrastive theory of causal explanation to analyze the notion of a genetic trait. The resulting definition is relational, an implication of which is that no trait is genetic always and everywhere. Rather, every trait may be either genetic or non-genetic, depending on explanatory context. I also outline some other advantages of connecting the debate to the wider causation literature, including how that yields us an account of the distinction between genetic traits and genetic dispositions.
Natural-born determinists: a new defense of causation as probability-raising
Philosophical Studies, 2010, 150.1: 1-20
A definition of causation as probability-raising is threatened by two kinds of counterexample: first, when a cause lowers the probability of its effect; and second, when the probability of an effect is raised by a non-cause. I present an account that deals successfully with problem cases of both these kinds. In doing so, I also explore some implications of incorporating into the metaphysical investigation considerations of causal psychology. In particular, if we interpret the formal account as a theory of causal judgment rather than of causation itself, that enables us indirectly to defend a slightly different, and more desirable, metaphysical account than otherwise. The psychological detour thus pays metaphysical dividends.
Walsh on causes and evolution
Philosophy of Science, 2010, 77.3: 457-467
Denis Walsh (2007) published in Philosophy of Science a striking defense of the statisticalist (i.e. non-causalist) position regarding the forces of evolution. I defend the causalist view against his objections. I argue that the heart of the issue lies in the nature of non-additive causation. Detailed consideration of that turns out to defuse Walsh’s ‘description-dependence’ critique of causalism. Nevertheless, the critique does suggest a basis for reconciliation between the two competing views.
On Lewis, Schaffer and the non-reductive evaluation of counterfactuals
Theoria, 2009, 75.4: 336-343
In a 2004 Analysis article, Jonathan Schaffer proposes an ingenious amendment to David Lewis’s semantics for counterfactuals. This amendment explicitly invokes the notion of causal independence, thus giving up Lewis’s ambitions for a reductive counterfactual account of causation. But in return, it rescues Lewis’s semantics from extant counterexamples. I present a new counterexample that defeats even Schaffer’s amendment. Further, I argue that a better approach would be to follow the causal modelling literature and evaluate counterfactuals via an explicit postulated causal structure. This alternative approach easily resolves the new counterexample, as well as all the previous ones. Up to now, its perceived drawback relative to Lewis’s scheme has been its non-reductiveness. But since the same drawback applies equally to Schaffer’s amended scheme, this becomes no longer a point of comparative disadvantage.
Is actual difference making actually different?
Journal of Philosophy, 2009, 106.11: 629-634
This paper responds to Kenneth Waters’s recent account of actual difference making. Among other things, I argue that although Waters is right that researchers may sometimes be justified in focusing on genes rather than other causes of phenotypic traits, he is wrong that the apparatus of actual difference makers overcomes the traditional causal parity thesis.
Progress in economics
chapter 11 in Oxford Handbook of Philosophy of Economics (Oxford 2009, Don Ross and Harold Kincaid, eds), pp306-337
(co-authored with Anna Alexandrova)
The 1994 US spectrum auction is now a paradigmatic case of the successful use of microeconomic theory for policy-making. We use a detailed analysis of it to review standard accounts in philosophy of science of how idealized models are connected to messy reality. We show that in order to understand what made the design of the spectrum auction successful, a new such account is required, and we present it here. Of especial interest is the light this sheds on the issue of progress in economics. In particular, it enables us to get clear on exactly what has been progressing, and on exactly what theory has – and has not – contributed to that. This in turn has important implications for just what it is about economic theory that we should value.
Causation and contrast classes
Philosophical Studies, 2008, 139.1: 111-123
I argue that causation is a contrastive relation: c-rather-than-C* causes e-rather-than-E*, where C* and E* are contrast classes associated respectively with actual events c and e. I explain why this is an improvement on the traditional binary view, and develop a detailed definition. It turns out that causation is only well defined in ‘uniform’ cases, where either all or none of the members of C* are related appropriately to members of E*.
Weighted explanations in history
Philosophy of the Social Sciences, 2008, 38.1: 76-96
Weighted explanations, whereby some causes are deemed more or less important than others, are ubiquitous in historical studies and indeed everyday life. But it turns out that furnishing a good account of them is a surprisingly delicate task, and one so far treated either unsatisfactorily or not at all in the explanation and philosophy of history literatures. As a result, it is still unclear exactly what a historian is claiming when offering a weighted explanation, and also unclear what kinds of evidence are relevant to assessing such claims. Drawing from influential recent work on causation and causal explanation, I develop a new definition of causal-explanatory strength. This yields a principled way to incorporate pragmatic aspects of explanation, and makes clear exactly which aspects of explanatory weighting are subjective and which objective. One payoff is that many widespread claims and assumptions regarding weighted explanations are now revealed, surprisingly, to be either false or confused.
Can ANOVA measure causal strength?
Quarterly Review of Biology, 2008, 83.1: 47-55
The statistical technique of analysis of variance is often used by biologists as a measure of causal factors’ relative strength or importance. I argue that it is a tool ill suited to this purpose, on several grounds. I suggest a superior alternative, and outline some implications. I finish with a diagnosis of the source of error – an unwitting inheritance of bad philosophy that now requires the remedy of better philosophy.
Causal efficacy and the analysis of variance
Biology and Philosophy, 2006, 21.2: 253-276
The causal impacts of genes and environment on any one biological trait are inextricably entangled, and consequently it is widely accepted that it makes no sense in singleton cases to privilege either factor for particular credit. On the other hand, at a population level it may well be the case that one of the factors is responsible for more variation than the other. Standard methodological practice in biology uses the statistical technique of analysis of variance to measure this latter kind of causal efficacy. In this paper, I argue that:
1) analysis of variance is in fact badly suited to this role; and
2) a superior alternative definition is available that readily reconciles both the entangled-singleton and the population-variation senses of causal efficacy.
Pearson’s wrong turning: against statistical measures of causal efficacy
Philosophy of Science, 2005, 72.5: 900-912
Standard statistical measures of strength of association, although pioneered by Pearson deliberately to be acausal, nowadays are routinely used to measure causal efficacy. But their acausal origins have left them ill suited to this latter purpose. I distinguish between two different conceptions of causal efficacy, and argue that:
1) Both conceptions can be useful;
2) The statistical measures only attempt to capture the first of them;
3) They are not fully successful even at this;
4) An alternative definition based more squarely on causal thinking not only captures the second conception, it can also capture the first one better too.
Comparing apples with oranges
Analysis, 2005, 65.1: 12-18
'If two men lay bricks to build a wall, we may quite fairly measure their contributions by counting the number laid by each; but if one mixes the mortar and the other lays the bricks, it would be absurd to measure their relative quantitative contributions by measuring the volumes of bricks and of mortar' (Richard Lewontin). Thus: 'For it to make sense to ask what (or how much) a cause contributes to an effect, the various causes must be commensurable in the way they produce their effects' (Elliott Sober). These claims sound reasonable but I show on the contrary that, for their contributions to be comparable, it is neither necessary nor sufficient that two causes also be commensurable. Rather, in a sense that I discuss, what really matters is that they be separable.
(If done well, I think popular articles can be admirable ways of engaging a wider public with philosophy, in much the same way as good teaching can be. Alas, they are not always done well. But I hope in mine to use familiar contexts to introduce several metaphysical issues without distortion, and without tediously claiming more for philosophy than it actually delivers.)
Bad luck or the ref's fault?
in Ted Richards (ed), Soccer and Philosophy (Open Court, 2010), pp319-326
I discuss classic issues surrounding luck, determinism and probability in the context of the penalty shoot-outs used in football’s World Cup. Can it ever make objective sense to blame an outcome on bad luck? I go on to discuss whether we can legitimately pin the blame on any one factor at all, such as a referee. This takes us into issues surrounding the apportioning of causal responsibility.
The Irrational Game: why there’s no perfect system
in Eric Bronson (ed), Poker and Philosophy (Open Court, 2006), pp105-115
I use poker as a convenient illustration of probability, determinism and counterfactuals. More originally, I also discuss the roles of rationality versus psychological hunches, and explain why even in principle game theory cannot provide us with the panacea of a perfect winning strategy.
(N.B. The document I link to here is slightly longer than the abbreviated version that appears in the book, and also differs in a few other minor details.)