Mill's Principle
Was Mill an advocate of act utilitarianism or rule utilitarianism? Which one is better?
(January 2025): I take it that this question was intended, at least in part, to emphasize the distinction between multi-level act utilitarianism and genuine rule utilitarianism (as these are popularly conflated). But, surprisingly, Mill does seem to be a rule-utilitarian after all! Today, I would at the very least reword some of the discussion of act versus rule utilitarianism, which I think is slightly misguided. For instance, the best argument for expected-value consequentialism isn't that it avoids uncertainty (it doesn't), but simply its extensional adequacy (see here for some discussion).
Act Utilitarianism (AU) and Rule Utilitarianism (RU) differ by their criteria of moral right. A token act a is morally right insofar as (and because):
(AU) the performance of a would make things best.
(RU) a is of some type A such that the performance of all A-acts would make things best.
How to cash out "best" is controversial within utilitarianism and is not my focus here; but at first pass, it's something like the promotion of aggregate well-being. I call "optimal" the token acts which, if done, would make things best; and (loosely following Parfit, but a bit idiosyncratically) "optimific" the token acts of a type such that, if acts of that type were generally done, things would be best. That is, (AU) says to act optimally, and (RU) says to act optimifically. Notice that the difference between (AU) and (RU) has nothing to do with explicit reference to rules; both criteria can be stated with or without such reference.
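One way to make the two criteria fully explicit is the following sketch; the value functions v and V are my own notation, not Mill's or Parfit's:

```latex
% v(a): value of the outcome of performing token act a.
% V(A): value of the outcome of all A-acts being generally performed.
% (Notation mine, for illustration only.)
\begin{align*}
\mathrm{Right}_{\mathrm{AU}}(a) &\iff v(a) \ge v(a')
  \text{ for every available alternative } a' \\
\mathrm{Right}_{\mathrm{RU}}(a) &\iff \exists A\, \bigl( a \in A \,\wedge\,
  V(A) \ge V(A') \text{ for every alternative type } A' \bigr)
\end{align*}
```

Stated this way, neither criterion mentions rules; (RU) quantifies over act types rather than tokens, and that is the whole difference.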
Mill's rule-utilitarianism
In his Utilitarianism, Mill assumes that optimific and optimal acts are identical, so it might be best to say that he advocates utilitarianism generally; however, he holds that acts are right in virtue of being optimific rather than in virtue of being optimal, so his position is more consistent with (RU). Mill claims that everyone acting optimally would be the "nearest approach" to "the ideal perfection of utilitarian morality"; i.e., that optimal acts are optimific:
To do as one would be done by, and to love one's neighbour as oneself, constitute the ideal perfection of utilitarian morality. As the means of making the nearest approach to this ideal, utility would enjoin…that a direct impulse to promote the general good may be in every individual one of the habitual motives of action, and the sentiments connected therewith may fill a large and prominent place in every human being's sentient existence.
Mill also argues that it's never actually best to violate the rules that would be best if they bound everyone; i.e., that optimific rules are optimal:
…we feel that the violation, for a present advantage, of a rule of such transcendent expediency, is not expedient…
If the competing criteria of moral right aligned in this way, the distinction would not be so important; Mill might be right to conflate the two. Now, more controversially, I argue that Mill still ultimately aligns more with (RU). His arguments against naive (AU) (which requires directly evaluating the rightness of each act) are suggestive of multi-level (AU) (which allows using heuristic rules where appropriate):
To inform a traveller respecting the place of his ultimate destination, is not to forbid the use of landmarks and direction-posts on the way.
But he holds that morality is fundamentally about optimific rule-following (i.e., that the rules turn out to be optimal is a secondary fact). The rules may admit general, well-defined exceptions:
In order that the exception may not extend itself beyond the need, and may have the least possible effect in weakening reliance on veracity, it ought to be recognized, and, if possible, its limits defined; and if the principle of utility is good for anything, it must be good for weighing these conflicting utilities against one another, and marking out the region within which one or the other preponderates.
But they should never be violated:
In the case of abstinences indeed—of things which people forbear to do, from moral considerations, though the consequences in the particular case might be beneficial—it would be unworthy of an intelligent agent not to be consciously aware that the action is of a class which, if practised generally, would be generally injurious, and that this is the ground of the obligation to abstain from it.
In other words, sometimes a is actually (AU)-right but apparently (RU)-wrong. In these cases, either a is not actually (RU)-wrong (in which case we should amend what we think is optimific), or a is actually (RU)-wrong (in which case a would be morally wrong; but as we saw, Mill doubts this possibility). The latter feature is not characteristic of multi-level (AU) heuristics, so I read Mill as advocating (RU) instead.
Objections to my reading
Crisp (1997) objects to this conclusion; since I think he successfully undercuts much of Urmson's justification for reading Mill as supporting (RU), this is worth addressing. First, he claims that this passage from Mill is closer to utilitarian generalization than to rule-utilitarianism. As he sees it,
Another version of indirect utilitarianism, which one might call utilitarian generalization, makes no essential reference to rules, but is structurally very similar to rule utilitarianism. It requires that we perform no action which is such that, if people were generally to perform it, welfare would not be maximized.
Lyons (1965) correctly characterizes this as a form of (RU). I've stated (RU) above without essential reference to rules, and given that one always does something, I see no difference between "do what's optimific" and "don't do what's not optimific". So this objection doesn't work. Next, Crisp argues:
It is likely that by "obligation" here, Mill means the sense of obligation which prevents our killing, stealing and so on.
Thus, Crisp claims that this passage isn't about the criterion of morality. But in an 1859 letter, Mill writes:
I believe that those who have no feeling of right & wrong cannot possibly intue the rightness or wrongness of anything… I will therefore pass to the case of those who have a true moral feeling, that is, a feeling of pain in the fact of violating a certain rule, quite independently of any expected consequences to themselves. It appears to me that to them the word ought means, that if they act otherwise, they shall be punished by this internal, & perfectly disinterested feeling. Unless they would be so punished, or unless they think they would, any assertion they make to themselves that they ought so to act seems to me to lose its proper meaning.
Thus, even if we grant that Mill means only a sense of obligation to follow a rule, it seems that Mill still takes this as the very foundation of what it means to be moral; it is precisely in virtue of this sense that actions are right or wrong. So this objection also doesn't work. Finally, Crisp argues:
Mill does not say that the consequences in the case he imagines are beneficial, only that they might be… Mill is suggesting [one should] at least be prepared to recognize that one should abstain from the sorts of activities that are likely to be very harmful to others.
Mill's use of "might" is easily explained without twisting his point into an argument about probable outcomes. Firstly, "are" would be inappropriate to describe the consequences of merely potential acts (cf. "would be generally injurious"). Secondly, Mill goes on to argue that this sort of divergence never actually occurs (with our quote above about optimific acts being optimal). So, "might" is the natural word to use when granting an opponent's assumption for the sake of argument; on Crisp's reading, we might expect something like "could" instead. Such a reading is interesting and coherent, but doesn't make much sense given that this argument is about "abstinences"; Mill uses the term elsewhere to mean the saving of money:
The essential principle of property being to assure to all persons what they have produced by their labor and accumulated by their abstinence.
This is much more in line with my reading than with Crisp's: spending money on frivolities might be best on particular occasions, but if practiced in general, would lead to bankruptcy. It's unclear what "abstinences" have to do with avoiding events that have some chance of benefit but overall negative expected value. So, this objection and alternate reading also don't work. Mill is simply suggesting that even if one thinks that an optimific act is suboptimal, the optimific act is still morally right; morality is fundamentally about following the optimific rules, and the fact that they turn out to be optimal is a happy accident which enables him to improve on the defect in Kant's fundamental rule: it isn't wrong, but it's "grotesquely" insufficient.
Overall, then, I maintain that Mill did not see any practical difference between (AU) and (RU), but rather advocated what he took to be properly implied by both (i.e., not naive (AU)); and furthermore, that his position is more consistent with (RU) than with multi-level (AU).
Act vs. rule utilitarianism
The two criteria I take to be relevant for comparing (AU) and (RU) are the extensional adequacy (are optimal/optimific acts the morally right ones?) and the intensional adequacy (is being optimal/optimific what it means to be morally right?) of the best versions of those respective theories; so, I first specify these best versions.
Firstly, expected consequences matter. I will not argue that we should reason directly from expected value (i.e., without loss aversion), but simply note that we must have some way to deal with the (at least epistemic) fact that outcomes are fundamentally uncertain. The results of that uncertainty are not up to us. We can try to bring about some results, but must take into account that we may fail in the trying (as Crisp's probabilistic reading of Mill warns us). Acts, generally, should be construed as "trying to act so"; to ask more than this is to ask moral agents to bring about better outcomes than they are able to, and one may as well ask non-agents to do the same. The only acts open to agents are of the form a := "try to bring about a′", where a′ is some set of expected consequences, so I shall say "try to act" for clarity. To use the intention/motive distinction Mill attributes to Bentham, actions are at least about intentions (and, since (RU) does not have limited scope, I take it that we should act toward our motives as we should act elsewhere: try to fix them such that the best outcomes result). Thus, I make no distinction between successfully complying with a set of rules and sincerely accepting them: the former is either a complete non-starter, or else simply means the latter.
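In symbols (a sketch in my own notation, not anything from Mill or Parfit): the most an agent can be asked to maximize is an expectation over outcomes they cannot fully control:

```latex
% Expected value of trying act a, where o ranges over possible
% outcomes, P(o | try a) is the chance of o given the trying, and
% v(o) is the value of o. (Notation mine, for illustration only.)
\begin{equation*}
\mathrm{EV}(a) \;=\; \sum_{o} P(o \mid \text{try } a)\, v(o)
\end{equation*}
```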
Next, (RU) must be made more robust, or fall to a "new ideal-world objection". From Parfit:
Follow the rules whose being followed by everyone would make things go best, unless some other people have not followed these rules, in which case do whatever you like.
This meta-rule may be weakly optimific, but it is far from optimal: it is badly calibrated. We must amend (RU) to be robust toward partial compliance, not just full compliance. The simplest way to do this is to always act as if you had at least some threshold of compliance; but this fails in "minimization" problems. Consider:
If at least n of the people in your village work together, they can raise a new flagpole that will make everyone incredibly happy. But if fewer than n people try to raise the flagpole, it will just fall and hurt them.
Optimifically, exactly n people should go to raise the flag. But surely, you should not always act as if at least n people were compliant. Uncalibrated (RU) is still not robust. Instead, we should calibrate acts to be optimal for the expected level of compliance; but since non-compliance may take many forms (perhaps some people would raise the flag anyway), this should be even more precise: calibrate action toward how you expect others to act.
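A toy calculation makes the point concrete. The sketch below is entirely my own illustration (the threshold of 10 lifters, the 19 other villagers each independently joining with probability p, and the payoff numbers are all made-up assumptions); it compares the expected aggregate welfare of joining against abstaining:

```python
from math import comb

VILLAGE = 20     # you plus 19 others (illustrative assumption)
THRESHOLD = 10   # lifters needed to raise the flagpole
HAPPINESS = 100  # per-villager gain if the flagpole goes up
INJURY = -10     # per-lifter harm if it falls

def binom_pmf(k, n, p):
    """Probability that exactly k of n independent others join."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def expected_welfare(you_join, p):
    """Expected aggregate welfare given your choice, where each of
    the 19 others independently joins with probability p."""
    ev = 0.0
    for k in range(VILLAGE):  # k = number of other joiners, 0..19
        lifters = k + (1 if you_join else 0)
        if lifters >= THRESHOLD:
            outcome = HAPPINESS * VILLAGE  # everyone is made happy
        else:
            outcome = INJURY * lifters     # the pole falls on the lifters
        ev += binom_pmf(k, VILLAGE - 1, p) * outcome
    return ev

for p in (0.2, 0.5, 0.8):
    print(f"p={p}: join={expected_welfare(True, p):9.2f}, "
          f"abstain={expected_welfare(False, p):9.2f}")
```

At low expected compliance, abstaining is expected-best; at high expected compliance, joining is. The act that is optimific under full compliance ("go raise the flag") is only the thing to do when you actually expect enough others to comply, which is just what calibration recommends.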
Calibration does lose out on some results; sometimes, uncalibrated (RU) gets lucky. For instance, you might not expect many people in your village to raise the flagpole. So, if everyone turned into a calibrated (RU) agent overnight without anyone else knowing, the flagpole wouldn't be raised; but if everyone turned into an uncalibrated (RU) agent, the flagpole would be raised. So, being calibrated is not everywhere best; but it is, by design, best in expectation.
So, instead of merely "act the way which, if everyone acted that way, would be best", we have "act the way which you expect to be best given how others will act". But this is better formulated as "act the way which you expect to be best given all you know", i.e., "act the way which you expect to be best". In other words, our revised form of (RU) is simply a form of (AU)! The two state the same criterion of moral right (i.e., it's a single theory that ought to be considered both act- and rule-utilitarian), and thus are also extensionally identical.
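Schematically, with E for one's total evidence (again my notation), the collapse runs:

```latex
% Calibrated (RU) maximizes expected value given one's evidence about
% others' acts; since that evidence is just part of one's total
% evidence E, the criterion is expected-value (AU). (Notation mine.)
\begin{align*}
\mathrm{Right}(a)
  &\iff \mathbb{E}[v \mid \text{try } a,\ \text{evidence about others' acts}]
    \text{ is maximal} \\
  &\iff \mathbb{E}[v \mid \text{try } a,\ E] \text{ is maximal}
\end{align*}
```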
Parfit raises a troubling objection about alienation:
If everyone always did whatever would make things go best, everyone's acts would, in most cases, have the best possible effects*. Things would go better than they would go if everyone always tried to do whatever would make things go best, but such attempts often failed. But the good effects of everyone's acts would again be outweighed, I believe, by the ways in which it would be worse if we all had the motives that would lead us to follow [AU]. As before, in losing many of our strong loves, loyalties, and personal aims, many of us would lose too much of what makes our lives worth living.
(There is a note at the starred point, where Parfit argues that this isn't strictly true; but we've conceded above that everyone being uncalibrated gets rewarded in the Stag Hunt-style case that he raises.) I grant that in the actual world, (AU) requires that we become alienated to the extent that this would maximize aggregate welfare, and that this is at the limit of human tolerability. But if it would be better to form more personal relationships in the ideal world, (AU) would recommend forming personal relationships. The alienation gap is between what's best for everyone and what's best for us. But this gap is more tolerable in the ideal world, because we now gain benefits from others acting optimally. And, by construction, if we all decreased the gap any more, we'd all feel worse off. One might expect that the right amount of alienation in the ideal world is less than what (AU) requires in the actual world; and if that were so, (AU) would simply require that lower amount of alienation instead. So, I don't think this objection works.
References
- Crisp, Roger. Routledge Philosophy Guidebook to Mill on Utilitarianism. Abingdon, Oxon: Routledge, 2009.
- Foot, Philippa, ed. Theories of Ethics. Oxford: Oxford University Press, 2002.
- Kagan, Shelly. âKantianism for Consequentialists.â In Groundwork for the Metaphysics of Morals, 111â156. New Haven: Yale University Press, 2017.
- Lyons, David. Forms and Limits of Utilitarianism. Reprinted (with corrections). Oxford: Clarendon Press, 1978.
- Mill, John Stuart. Utilitarianism. Luton: AUK Classics, 2014.
- Mill, John Stuart, and J. Laurence Laughlin. Principles of Political Economy. New York: D. Appleton, 1884.
- Mill, John Stuart. The Later Letters of John Stuart Mill, 1849–1873. Edited by Francis E. Mineka and Dwight N. Lindley. Vols. XIV–XVII. Toronto: University of Toronto Press, 1972.
- Parfit, Derek. On What Matters: Volume One. Oxford: Oxford University Press, 2011.