Mill’s Principle

Was Mill an advocate of act utilitarianism or rule utilitarianism? Which one is better?

(January 2025): I take it that this question was intended, at least in part, to emphasize the distinction between multi-level act utilitarianism and genuine rule utilitarianism (as these are popularly conflated). But, surprisingly, Mill does seem to be a rule-utilitarian after all! Today, I would at the very least reword some of the discussion of act versus rule utilitarianism, which I think is slightly misguided. For instance, the best argument for expected-value consequentialism isn’t that it avoids uncertainty (it doesn’t), but simply its extensional adequacy (see here for some discussion).

Act Utilitarianism (AU) and Rule Utilitarianism (RU) differ by their criteria of moral right. A token act a is morally right insofar as (and because):

(AU) the performance of a would make things best.

(RU) a is of some type A such that the performance of all A-acts would make things best.

How to cash out ‘best’ is controversial within utilitarianism and is not my focus here; but at first pass, it’s something like the promotion of aggregate well-being. I call ‘optimal’ those token acts which, if done, would make things best; and (loosely following Parfit, but a bit idiosyncratically) ‘optimific’ those token acts of a type such that, if acts of that type were generally done, things would be best. That is, (AU) says to act optimally, and (RU) says to act optimifically. Notice that the difference between (AU) and (RU) has nothing to do with explicit reference to rules; both criteria can be stated with or without such reference.
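Stated slightly more formally (a sketch in my own notation, not Mill’s or Parfit’s: V is a value function over outcomes, Alt(a) the set of acts available to the agent, and [A] the outcome in which A-acts are generally performed):

```latex
% Assumed notation: V = value function over outcomes, Alt(a) = available alternatives,
% [A] = the outcome of acts of type A being generally performed.
\text{(AU)}\quad \mathrm{Right}(a) \iff \forall a' \in \mathrm{Alt}(a),\; V(a) \ge V(a')
\text{(RU)}\quad \mathrm{Right}(a) \iff \exists A \,\big( a \in A \;\wedge\; \forall A',\; V([A]) \ge V([A']) \big)
```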

Mill’s rule-utilitarianism

In his Utilitarianism, Mill assumes that optimific and optimal acts are identical, so it might be best to say that he advocates utilitarianism generally; however, he holds that acts are right in virtue of being optimific rather than in virtue of being optimal, so his position is more consistent with (RU). Mill claims that everyone acting optimally would be the ‘nearest approach’ to ‘the ideal perfection of utilitarian morality’; i.e., that optimal acts are optimific:

To do as one would be done by, and to love one’s neighbour as oneself, constitute the ideal perfection of utilitarian morality. As the means of making the nearest approach to this ideal, utility would enjoin
that a direct impulse to promote the general good may be in every individual one of the habitual motives of action, and the sentiments connected therewith may fill a large and prominent place in every human being’s sentient existence.

Mill also argues that it’s never actually best to violate the rules that would be best if they bound everyone; i.e., that optimific rules are optimal:


we feel that the violation, for a present advantage, of a rule of such transcendent expediency, is not expedient


If the competing criteria of moral right aligned in this way, the distinction would not be so important; Mill might be right to conflate the two. Now, more controversially, I argue that Mill still ultimately aligns more with (RU). His arguments against naive (AU) (which requires directly evaluating the rightness of each act) are suggestive of multi-level (AU) (which allows using heuristic rules where appropriate):

To inform a traveller respecting the place of his ultimate destination, is not to forbid the use of landmarks and direction-posts on the way.

But he holds that morality is fundamentally about optimific rule-following (i.e., it’s a secondary fact that the rules are also optimal). The rules may admit limited, well-defined exceptions:

In order that the exception may not extend itself beyond the need, and may have the least possible effect in weakening reliance on veracity, it ought to be recognized, and, if possible, its limits defined; and if the principle of utility is good for anything, it must be good for weighing these conflicting utilities against one another, and marking out the region within which one or the other preponderates.

But they should never be violated:

In the case of abstinences indeed—of things which people forbear to do, from moral considerations, though the consequences in the particular case might be beneficial—it would be unworthy of an intelligent agent not to be consciously aware that the action is of a class which, if practised generally, would be generally injurious, and that this is the ground of the obligation to abstain from it.

In other words, sometimes an act a is actually (AU)-right but apparently (RU)-wrong. In these cases, either a is not actually (RU)-wrong (in which case we should amend what we think is optimific), or a is actually (RU)-wrong (in which case a would be morally wrong; but as we saw, Mill doubts this possibility). The latter feature is not characteristic of multi-level (AU) heuristics, so I read Mill as advocating (RU) instead.

Objections to my reading

Crisp (1997) objects to this conclusion; since I think he successfully undercuts much of Urmson’s justification for reading Mill as supporting (RU), this is worth addressing. First, he claims that this passage from Mill is closer to utilitarian generalization than to rule-utilitarianism. As he sees it,

Another version of indirect utilitarianism, which one might call utilitarian generalization, makes no essential reference to rules, but is structurally very similar to rule utilitarianism. It requires that we perform no action which is such that, if people were generally to perform it, welfare would not be maximized.

Lyons (1965) correctly characterizes this as a form of (RU). I’ve stated (RU) above without essential reference to rules, and, given that one always does something, I see no difference between ‘do what’s optimific’ and ‘don’t do what’s not optimific’. So this objection doesn’t work. Next, Crisp argues:

It is likely that by ‘obligation’ here, Mill means the sense of obligation which prevents our killing, stealing and so on.

Thus, Crisp claims that this passage isn’t about the criterion of morality. But in an 1859 letter, Mill writes:

I believe that those who have no feeling of right & wrong cannot possibly intue the rightness or wrongness of anything
 I will therefore pass to the case of those who have a true moral feeling, that is, a feeling of pain in the fact of violating a certain rule, quite independently of any expected consequences to themselves. It appears to me that to them the word ought means, that if they act otherwise, they shall be punished by this internal, & perfectly disinterested feeling. Unless they would be so punished, or unless they think they would, any assertion they make to themselves that they ought so to act seems to me to lose its proper meaning.

Thus, even if we grant that Mill means only a sense of obligation to follow a rule, it seems that Mill still takes this as the very foundation of what it means to be moral; it is precisely in virtue of this sense that actions are right or wrong. So this objection also doesn’t work. Finally, Crisp argues:

Mill does not say that the consequences in the case he imagines are beneficial, only that they might be
 Mill is suggesting [one should] at least be prepared to recognize that one should abstain from the sorts of activities that are likely to be very harmful to others.

Mill’s use of ‘might’ is very clearly explained without the need to twist his point into an argument about probable outcomes. Firstly, ‘are’ would be inappropriate to describe the consequences of merely potential acts (cf. ‘would be generally injurious’). Secondly, Mill goes on to argue that this sort of divergence never actually occurs (recall our quote about optimific acts being optimal); so ‘might’ is the natural word to use when granting an opponent’s assumption for the sake of argument, whereas on Crisp’s reading we might expect something like ‘could’ instead. Such a reading is interesting and coherent, but doesn’t make much sense given that this argument is about ‘abstinences’; Mill uses the term elsewhere to mean the saving of money:

The essential principle of property being to assure to all persons what they have produced by their labor and accumulated by their abstinence.

This is much more in line with my reading than with Crisp’s: spending money on frivolities might be best on particular occasions, but, if practiced in general, would lead to bankruptcy. It’s unclear what ‘abstinences’ have to do with avoiding events that have some chance of benefit but overall negative expected value. So, this objection and alternate reading also don’t work. Mill is simply suggesting that even if one thinks that an optimific act is suboptimal, the optimific act is still morally right; morality is fundamentally about following the optimific rules, and the fact that they turn out to be optimal is a happy accident which enables Mill to improve on the defect he finds in Kant’s fundamental rule: it isn’t wrong, but it is ‘grotesquely’ insufficient.

Overall, then, I maintain the position that Mill did not see any practical difference between (AU) and (RU), but rather advocated for what he took to be properly implied by both (i.e., not naive (AU)); and furthermore, that his position is more consistent with (RU) than multi-level (AU).

Act vs. rule utilitarianism

The two criteria I take to be relevant for comparing (AU) and (RU) are the extensional adequacy (are optimal/optimific acts the morally right ones?) and the intensional adequacy (is being optimal/optimific what it means to be morally right?) of the best versions of the respective theories; so, I first specify these best versions.

Firstly, expected consequences matter. I will not argue that we should directly reason off expected value (i.e., without loss aversion), but simply note that we must have some way to deal with the epistemic (at least) fact that outcomes are fundamentally uncertain. The results of that uncertainty are not up to us. We can try to bring about some results, but must take into account that we may fail in the trying (as Crisp’s probabilistic reading of Mill warns us). Acts, generally, should be construed as ‘trying to act so’; to ask more than this is to ask moral agents to bring about better outcomes than they are able to, and one may as well ask non-agents to do the same. The only acts open to agents are of the form a := ‘try to bring about a∗’, where a∗ is some set of expected consequences, so I shall say ‘try to act’ for clarity. To use the intention/motive distinction Mill attributes to Bentham, actions are at least about intentions (and, since (RU) does not have limited scope, I take it that we should act toward our motives as we should act elsewhere: try to fix them such that the best outcomes result). Thus, I make no distinction between successfully complying with a set of rules and sincerely accepting them: the former is either a complete non-starter, or else simply means the latter.
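As a gloss (my notation, not Mill’s or Parfit’s), evaluating ‘try to bring about a∗’ amounts to something like an expected-value calculation over the outcomes o that the trying might produce:

```latex
% Assumed notation: P(o | a) is the agent's credence that outcome o results
% from trying a, and V(o) is the value of that outcome.
\mathrm{EV}(a) \;=\; \sum_{o} P(o \mid a)\, V(o), \qquad
\text{and one tries the available } a \text{ with the greatest } \mathrm{EV}(a).
```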

Next, (RU) must be made more robust, or it falls to a ‘new ideal-world objection’. From Parfit:

Follow the rules whose being followed by everyone would make things go best, unless some other people have not followed these rules, in which case do whatever you like.

This meta-rule may be weakly optimific, but it is far from optimal: it is badly calibrated. We must amend (RU) to be robust to partial compliance, not just full compliance. The simplest way to do this is to always act as if at least some threshold level of compliance obtained; but this fails in ‘minimization’ problems. Consider:

If at least n of the people in your village work together, they can raise a new flagpole that will make everyone incredibly happy. But if fewer than n people try to raise the flagpole, it will just fall and hurt them.

Optimifically, exactly n people should go to raise the flagpole. But surely, you should not always act as if at least n people were compliant. Uncalibrated (RU) is still not robust. Instead, we should calibrate acts to be optimal for the expected level of compliance; but since non-compliance may take many forms (perhaps some people would raise the flagpole anyway), this should be even more precise: calibrate action toward how you expect others to act.
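As a toy illustration of this calibrated decision rule (entirely my own construction: the threshold, payoffs, and belief distribution below are invented for the example, not drawn from Mill or Parfit), here is a sketch comparing ‘join the flagpole effort’ against ‘abstain’ under one’s expectations about how many others will show up:

```python
# A toy sketch of the calibrated rule: choose the act with the highest expected
# value given your beliefs about how others will act. All numbers are invented.

N_REQUIRED = 10          # hypothetical threshold: the flagpole needs this many people
BENEFIT = 100.0          # value if the flagpole goes up
HARM_PER_PERSON = -5.0   # value per person hurt if too few try

# Hypothetical credences over how many *other* villagers will try to raise the pole.
beliefs = {7: 0.2, 8: 0.3, 9: 0.3, 10: 0.2}

def value(total_trying: int) -> float:
    """Value of the outcome when `total_trying` people attempt the raising."""
    if total_trying == 0:
        return 0.0
    if total_trying >= N_REQUIRED:
        return BENEFIT
    return HARM_PER_PERSON * total_trying

def expected_value(i_join: bool) -> float:
    """Expected value of joining (or abstaining), given beliefs about others."""
    return sum(p * value(others + (1 if i_join else 0)) for others, p in beliefs.items())

# The calibrated agent joins only if doing so is best in expectation, rather than
# acting as if some fixed level of compliance were guaranteed.
ev_join, ev_abstain = expected_value(True), expected_value(False)
print(f"EV(join) = {ev_join:.1f}, EV(abstain) = {ev_abstain:.1f}")
print("Calibrated choice:", "join" if ev_join > ev_abstain else "abstain")
```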

Being calibrated does lose out on some results; sometimes, uncalibrated (RU) gets lucky. For instance, you might not expect many people in your village to help raise the flagpole. So, if everyone turned into a calibrated (RU) agent overnight without anyone else knowing, the flagpole wouldn’t be raised; but if everyone turned into an uncalibrated (RU) agent, it would be. So, being calibrated is not everywhere best; but it is, by design, best in expectation.

So, instead of merely ‘act the way which, if everyone acted that way, would be best’, we have ‘act the way which you expect to be best given how others will act’. But this is better formulated as ‘act the way which you expect to be best given all you know’, i.e., ‘act the way which you expect to be best’. In other words, our revised form of (RU) is simply a form of (AU)! The two state the same criterion of moral right (i.e., it’s a single theory that ought to be considered both act- and rule-utilitarian), and thus they are also extensionally identical.

Parfit raises a troubling objection about alienation:

If everyone always did whatever would make things go best, everyone’s acts would, in most cases, have the best possible effects*. Things would go better than they would go if everyone always tried to do whatever would make things go best, but such attempts often failed. But the good effects of everyone’s acts would again be outweighed, I believe, by the ways in which it would be worse if we all had the motives that would lead us to follow [AU]. As before, in losing many of our strong loves, loyalties, and personal aims, many of us would lose too much of what makes our lives worth living.

(There is a note at the starred point, where Parfit argues that this isn’t strictly true; but we’ve conceded above that everyone being uncalibrated gets rewarded in the Stag Hunt-style case he raises.) I grant that in the actual world, (AU) requires that we become alienated to the extent that this would maximize aggregate welfare, and that this is at the limit of human tolerability. But if it would be better to form more personal relationships in the ideal world, (AU) would recommend forming more personal relationships. The alienation gap is between what’s best for everyone and what’s best for us. But this gap is more tolerable in the ideal world, because we then gain the benefits of others acting optimally. And, by construction, if we all decreased the gap any more, we’d all feel worse off. One might expect that the right amount of alienation in the ideal world is less than what (AU) requires in the actual world; and if that were so, (AU) would simply require that lower amount of alienation instead. So, I don’t think this objection works.

References

  • Crisp, Roger. Routledge Philosophy Guidebook to Mill on Utilitarianism. Abingdon, Oxon: Routledge, 2009.
  • Foot, Philippa, ed. Theories of Ethics. Oxford: Oxford University Press, 2002.
  • Kagan, Shelly. “Kantianism for Consequentialists.” In Groundwork for the Metaphysics of Morals, 111–156. New Haven: Yale University Press, 2017.
  • Lyons, David. Forms and Limits of Utilitarianism. Reprinted (with corrections). Oxford: Clarendon Press, 1978.
  • Mill, John Stuart. Utilitarianism. Luton: AUK Classics, 2014.
  • Mill, John Stuart. Principles of Political Economy. Edited by J. Laurence Laughlin. New York: D. Appleton, 1884.
  • Mill, John Stuart. The Later Letters of John Stuart Mill, 1849–1873. Edited by Francis Mineka and Dwight Lindley. Vols. XIV–XVII. Toronto: University of Toronto Press, 1972.
  • Parfit, Derek. On What Matters: Volume One. Oxford: Oxford University Press, 2011.