Some Problems Concerning Epistemic Justification
Jamie Carlin Watson here articulates an interesting problem concerning epistemic justification. He writes:
“But the idea that justification is a matter of having good reasons faces a serious obstacle. Normally, when we give reasons for a belief, we cite other beliefs. Take, for example, the proposition, “The cat is on the mat.” If you believe it and are asked why, you might offer the following beliefs to support it:
1. I see that the cat is on the mat.
2. Seeing that X implies that X.
Together, these seem to constitute a good reason for believing the proposition:
3. The cat is on the mat.
But does this mean that proposition 3 is epistemically justified for you? Even if the combination of propositions 1 and 2 counts as a good reason to believe 3, proposition 3 is not justified unless both 1 and 2 are also justified. Do we have good reasons for believing 1 and 2? If not, then according to the good reasons account of justification, propositions 1 and 2 are unjustified, which means that 3 is unjustified. If we do have good reasons for believing 1 and 2, do we have good reasons for believing those propositions? How long does our chain of good reasons have to be before even one belief is justified?”
A critical question to consider is this: what is the nature of epistemic justification? I have suggested in my published work (here) that there are two kinds of justification: loose and precise. (I don’t mean to suggest that these are the only two kinds.) The former is fallible, and roughly a matter of a proposition’s being more likely true than not, given the relevant evidence. For example, if q supports p and the support relation between q and p is such that q’s support of p makes p probable to the degree of .7, then p is more likely true than false, given q. However, p might still turn out to be false; p is thus justified only loosely and fallibly. The latter kind of justification concerns being epistemically certain and thus infallible regarding a proposition; i.e., given the pertinent reasons for believing that p and the degree of support they provide for p, one cannot be wrong that p, assuming one has access to those pertinent reasons.
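The threshold condition for loose justification can be stated compactly. This is just an illustrative sketch of the example above, where Pr stands for evidential probability and .7 is the example’s stipulated degree of support:

```latex
% Loose justification (illustrative): p is loosely justified by evidence q
% when q makes p more likely true than not.
\Pr(p \mid q) = 0.7 > 0.5
\quad\Longrightarrow\quad
p \text{ is more likely true than false, given } q
```

Any value of Pr(p | q) strictly greater than .5 would do; the point is only that loose justification requires clearing this threshold, not certainty.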
Now, with respect to precise justification (PJ), arguably, (1) is not justified. One cannot be epistemically certain that one sees the cat on the mat, since one might be wrong that one is seeing what one takes oneself to be seeing. Perhaps what appears to be a cat is in fact a dog, or a stuffed animal, or the content of a hallucination. And since (1) is not justified, neither is (3).
But we can modify the example to obtain PJ. Consider this:
1*. I am being appeared to cat-on-the-matly.
2*. Being appeared to cat-on-the-matly entails that there is a cat-on-the-mat experience.
Together, these seem to constitute a good reason for believing the proposition:
3*. There is a cat-on-the-mat experience.
Arguably, one can be epistemically certain of (1*) and (2*) and hence of (3*). But (3*) is quite different from (3).
What about loose justification (LJ)? Plausibly, (1) and (2) are loosely (and thus fallibly) justified in the sense of being more probably true than not, given the evidence. Therefore, we have reason to believe that (3) is also loosely justified.
But does (1) require independent reasons before one can believe it with LJ? It depends on one’s assumptions. As Watson notes, if one assumes that all epistemic justification requires inferring beliefs from one or more other beliefs, then there is a problem he calls the dilemma of inferential justification (DIJ). On one hand, if there are no good (independent) reasons to believe that (1), then (1) is unjustified and thus (3) is unjustified. On the other hand, if there is a good reason to believe that (1), say proposition (1a), then either (1a) is unjustified or we need another belief, (1b), to justify (1a). This line of questioning generates an infinite regress; since the regress can never be completed, (1) is unjustified, and hence (3) is unjustified.
One can block the DIJ by rejecting the assumption that all epistemic justification requires inferring beliefs from one or more other beliefs. How might one do this? My studied yet provisional move is to claim that some beliefs are properly basic and thus self-evident (or otherwise obviously true) or supported by non-belief states, such as direct and indubitable experience. For example, (1*) seems obviously and indeed infallibly true based on direct experience, assuming that we have infallible access to (at least some of) our own mental states with respect to how things seem to us. This approach is called foundationalism. I grant that there are problems with foundationalism, but I cannot address them now.
(Note to self: “reasons for and against foundationalism” is a good topic for another post.)