An Example of the Harm Objection to Consequentialism
Consequentialism is roughly the position in moral philosophy that the consequences or results of an act make that act either morally right or morally wrong. If the results are sufficiently beneficial, then the act is right. If the results fail to be sufficiently beneficial, then the act is wrong. On this view, actions themselves are neither intrinsically right nor wrong. Instead, they are made right or wrong by their results. In other words, the end either justifies or condemns the means used to achieve it.*
A common objection to this view is that it can be used to justify any act whatsoever, including the most harmful of acts. I call this the harm objection. Here is an example:
Suppose that Smith wants to invest $500K in a technology that, if he were to invest the money, would be used to increase oil production at a local refinery and thus decrease gas prices and heating oil prices in the surrounding cities and towns. But the technology can be used to obtain this result only if Smith invests the money. Smith recognizes the beneficial results this technology would engender for the people in his area. The problem is that Smith doesn’t have $500K to invest.
However, Smith knows that his neighbor, Jones, keeps $500K locked in a safe in his house. So Smith breaks into Jones’ house, beats him severely, ties him to a chair, holds a gun to his head, and forces Jones to hand over the money. Smith then uses the cash to invest in the technology. When gas and heating oil prices drop significantly, which in turn generates further benefits for the community, he reasons to himself that his actions were morally acceptable given the good results for the people in his area.
Consequentialist theories, such as utilitarianism, would seem to entail that Smith’s action is not only morally justified, but also morally obligatory. Since this entailment seems absurd given our moral intuitions that such actions are wrong, one has a reason to reject consequentialism.
Non facias malum ut inde fiat bonum. (“Do not do evil that good may come of it.”)
Some might respond that this sort of objection is a problem for act utilitarianism, but not necessarily for rule utilitarianism.** But there is a concern that rule utilitarianism might collapse into act utilitarianism, which would weaken that response.
Others might reply by biting the bullet and claiming that our moral intuitions are false and that it’s morally acceptable and even obligatory to harm the few to benefit the many. This is an implausible rejoinder, in my view.
Still others might respond by agreeing that it’s wrong to harm the few to benefit the many but that, despite appearances, harming the few never produces the best results for the many. However, such a claim raises what I call the predictability problem: no human being is cognitively equipped to know, or to form a sufficient number of reasonable beliefs about, the long-term consequences of his or her actions. This fact about the limits of our epistemic capacity counts against utilitarianism.
In short, consequentialist theories seem simplistic. They reduce the moral life to actions and their results. But there is much more to think about: rights, justice, virtues, vices, dignity, respect, etc. Morality is too complex for consequentialism.
I grant that there is much more to say about this topic and that this post is far from being demonstrative.
*This description of consequentialism highlights its metaethical features. In terms of normative ethics, according to consequentialist views, one ought to do whatever generates the most benefit, either for the individual (moral egoism) or for as many as possible (utilitarianism).
**Roughly, act utilitarianism is the view that, for any morally significant choice, we should select the option that will generate the greatest net benefit. According to rule utilitarianism, for any morally significant choice, (i) a specific action is morally permissible only if it conforms to a justified moral rule; and (ii) a moral rule is justified only if its adoption in our moral decision-making would generate better overall consequences (i.e., utility, benefits) than the adoption of other possible rules (or no rule at all). (See Stephen Nathanson’s Act and Rule Utilitarianism for a detailed discussion.)
By the way, I was once at a philosophical conference. A presenter was discussing Kantian ethics. A philosopher in the audience posed a question, which he prefaced by saying that act utilitarianism is a “loony position.” How’s that for candor?