Are You a Brain in a Vat?
Consider the following triad. If any two of the propositions are true, the third must be false, and yet each proposition is defensible.
(1) One cannot know that one is not a brain-in-a-vat (BIV).
(2) If one cannot know that one is not a BIV, then one’s knowledge is mostly limited to basic a priori propositions of mathematics and logic, self-evident matters, and the like.
(3) One’s knowledge is not mostly limited to basic a priori propositions of mathematics and logic, self-evident matters, and the like.
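The joint inconsistency of the triad can be checked mechanically. Here is a minimal sketch in Python (the atom labels A and L are mine, introduced purely for illustration) that enumerates every truth assignment and confirms that no assignment makes all three propositions true, so affirming any two forces denial of the third.

```python
from itertools import product

# Two atomic claims (labels are mine, for illustration):
#   A = "one cannot know that one is not a BIV"
#   L = "one's knowledge is mostly limited to basic a priori matters"
# The triad: (1) A, (2) A -> L, (3) not L.
for A, L in product([True, False], repeat=2):
    p1 = A                  # proposition (1)
    p2 = (not A) or L       # proposition (2), as a material conditional
    p3 = not L              # proposition (3)
    # No assignment satisfies all three at once: the triad is jointly
    # inconsistent, so any two true propositions falsify the third.
    assert not (p1 and p2 and p3)
print("jointly inconsistent")
```

Running the loop raises no assertion error, which is just the formal counterpart of the observation that consistency demands giving up one member of the triad.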
Here are some reasons in support of each proposition. (1) is true on the assumption that knowledge requires epistemic certainty. Since one cannot be epistemically certain that one is not a BIV (i.e., one cannot completely rule out the possibility of being a BIV), one cannot know that one is not a BIV. Those inclined to hold that knowledge requires epistemic certainty (i.e., infallibilists about knowledge) are likely to accept (1), and there are reasonable arguments for epistemic infallibilism. Moreover, (1) is plausible even if one does not assume that knowledge requires epistemic certainty: it seems quite difficult to prove conclusively that one is not a BIV, even if there is no sufficient reason to believe that one is a BIV.
Regarding (2), if one cannot know that one is not a BIV, then for all one knows, one might just be a BIV. In other words, we are not in an epistemic position to rule out conclusively, or with non-subjective certainty, that we are brains in vats. It’s at least possible that we are like Neo before he takes the red pill in The Matrix. And if we are brains in vats, then most of what we commonly take ourselves to know on the basis of experience, we don’t know: it might be that a computerized matrix (or some other mechanism) is generating all of our experiences of what we take to be the external world. If Jones is a BIV, for example, then Jones doesn’t know that he’s eating a bagel and drinking coffee for breakfast, although he reasonably but fallibly believes that he is doing so. He’s actually a BIV, and the computerized matrix is stimulating his brain to believe (falsely) that he is eating a bagel and drinking coffee for breakfast when in fact he has no hands, stomach, etc., but is just a brain. Perhaps our beliefs about basic a priori propositions of mathematics and logic and about self-evident matters survive this vatty situation, such that we can know such propositions, but it seems that even the most reasonable a posteriori beliefs about the external world would fail to count as knowledge if we are living in the vatty matrix.
And concerning (3), common sense and practice suggest that we know many propositions beyond the basics of mathematics and logic. Moreover, common sense and the ordinary assumptions of daily life are mostly reliable — at least for practical purposes. We know, for instance, that George Washington was the first POTUS, that most cars run on gasoline, and that fresh water freezes at 32 degrees Fahrenheit. Of course, it is possible that such propositions are false, but we commonly take them to be true, and there is good evidence for them, which suggests that it’s unlikely that they are false.
Each proposition is defensible. Yet logical consistency demands that you deny one as a condition for accepting the other two. Which would you deny, and why?
You might deny (3). That is, on the assumption that knowledge requires epistemic certainty, although we might not like to admit it, we don’t know (strictly speaking) very much beyond basic a priori propositions of mathematics and logic, matters of self-evidence, and the like. Granted, we can have reasonable though fallible beliefs outside of math and logic, but most of them fall short of counting as knowledge in the strict sense, although for practical purposes we often refer to them as pieces of “knowledge.”
If we don’t know (strictly speaking) very much beyond basic a priori propositions of mathematics and logic, etc., then why shouldn’t we suspend all of our beliefs, as the ancient Pyrrhonists (supposedly) advised? Well, because — in my view, at least — we can still obtain reasonable belief or reasonable acceptance. The ancient Academics (such as Arcesilaus and Carneades) called such beliefs* eulogon (roughly, the reasonable or well-grounded) and pithanon (the plausible or persuasive, given the evidence). Cicero used the term probabilitas (probable beliefs). Since we can obtain such reasonable positions, we need not suspend all belief. We can live in the rational space of reasonable belief, which occupies the territory between belief-suspension and certainty.
You might deny (2). That is, you might say that although we cannot conclusively rule out the possibility of being a BIV, we can still know quite a lot on the basis of experience, since knowledge doesn’t require certainty but only adequate though fallible justification.**
Or you might deny (1). You might say that even though it is logically and even epistemically possible that we are brains in vats, we can practically exclude such possibilities, since they are unlikely. For pragmatic purposes, then, we can know that we are not brains in vats. (Note that “know” in the previous sentence is used in the fallibilist sense of ‘knowledge.’)
*If you don’t want to call them ‘beliefs,’ call them ‘doxastic endorsements’ or ‘faith commitments’ or ‘practically rational judgments’ or something like that.
** The debate between infallibilists and fallibilists is a crucial matter here. If infallibilists are right that knowledge requires epistemic certainty, then it seems one should deny (3), which indicates (at least) a widespread but non-global skepticism about our ability to acquire propositional knowledge. But if fallibilists are right that knowledge doesn’t require certainty but only fallibly adequate justification — that is, if it is possible to know that p despite the fact that p might be false*** — then one is in a position to affirm (3) and reject either (1) or (2).
*** These are called “concessive knowledge ascriptions,” and the awkwardness of such ascriptions can be explained in a way that enables the development of an argument for infallibilism.