Human cognition and behavior rely heavily on the notion that evidence (data, premises) can affect the credibility of hypotheses (theories, conclusions). This general idea seems to underlie sound and effective inferential practices in all sorts of domains, from everyday reasoning up to the frontiers of science. Yet it is also clear that, even with extensive and truthful evidence available, drawing a mistaken conclusion is more than a mere possibility. For painfully concrete examples, one only has to consider missed medical diagnoses (see Winters et al. 2012) or judicial errors (see Liebman et al. 2000). The Scottish philosopher David Hume (1711–1776) is usually credited with having disclosed the theoretical roots of these considerations in a particularly transparent way (although, arguably, Hume’s line of thought cuts deeper than this: see Howson 2000; also see Lange 2011 and Varzi 2008). In most cases of interest, Hume pointed out, many alternative candidate hypotheses remain logically compatible with all the relevant information at one’s disposal, so that none of the former can be singled out by the latter with full certainty. Thus, under usual circumstances, reasoning from evidence is necessarily fallible.
This fundamental insight has been the source of a lasting theoretical challenge: if amenable to analysis, the role of evidence as supporting (or infirming) hypotheses has to be grasped by more nuanced tools than plain logical entailment. As emphasized in a joke attributed to the American philosopher Morris Raphael Cohen (1880–1947), logic texts had to be divided into two parts: in the first part, on deductive logic, unwarranted forms of inference (deductive fallacies) are exposed; in the second part, on inductive logic, they are endorsed (see Meehl 1990, 110). In contemporary philosophy, confirmation theory can be roughly described as the area where efforts have been made to take up the challenge of defining plausible models of non-deductive reasoning. Its central technical term—confirmation—has often been used more or less interchangeably with “evidential support”, “inductive strength”, and the like. Here we will generally comply with this liberal usage, although more subtle conceptual and terminological distinctions could be usefully drawn.
Confirmation theory has proven a rather difficult endeavour. In principle, it would aim at providing understanding and guidance for tasks such as diagnosis, prediction, and learning in virtually any area of inquiry. Yet popular accounts of confirmation have often been taken to run into trouble even when faced with toy philosophical examples. Be that as it may, there is at least one real-world kind of activity which has remained a prevalent target and benchmark, i.e., scientific reasoning, and especially key episodes from the history of modern and contemporary natural science. The motivation for this is easy to see. Mature sciences seem to have been uniquely effective in relying on observed evidence to establish extremely general, powerful and sophisticated theories. Indeed, being capable of receiving genuine support from empirical evidence is itself a very distinctive trait of scientific hypotheses as compared to other kinds of statements. A philosophical characterization of what science is would then seem to require an understanding of the logic of confirmation. And so, traditionally, confirmation theory has come to be a central concern of philosophers of science.