Notes from graduate seminar with Hilary Greaves and Tomi Francis on whether moderate (‘half-hearted’) or radical (‘thoroughgoing’) non-consequentialist moral theories are more plausible. The format seems to be a presentation with clarificatory questions for the first half of each class, and then substantive discussion on questions as decided by Slido votes.

I find it difficult to write emails sometimes, but I thought my permission email was good. (I basically just use this as a template for all graduate class permission emails.)

Hi Prof. Greaves,

I was wondering if I’d be able to attend your class with Dr. Francis (CC’d) on moderate vs. radical non-consequentialism?

For background, I’m a math & philosophy undergraduate; I’ve covered some connected issues in moral philosophy and decision theory in previous undergraduate and graduate classes, respectively. (I have access to the Canvas site, and I’ve read through the Week 1 notes.)

Thanks for your consideration!

Cheers, Oak

Week 1: Introduction

Pre-reading was a set of introductory notes that Hilary had written; a lot of it is fairly familiar material (‘all of what you might cover in an undergraduate moral philosophy course, and a little bit more besides’), but we were promised that after this week there would be a gear shift into research-level material. So not many content-notes here.

I think there are plausibly non-welfare goods, and I’m not sure what the principled line is for when something becomes non-consequentialist. Talked a bit with Tomi (who I’d met before) in slightly more detail about ruling out ‘gimmicky’ consequentialized non-consequentialist theories; a working hypothesis is that it suffices to require that axiology is sufficiently objective (in particular, not time-relative and not agent-relative).

Submitted a question about whether expected-value consequentialism was really more action-guiding: I can’t perfectly access what my evidence is (there are many failures of negative introspection, where I don’t know that I don’t actually know something), or what it says is best. We can follow objective rules like ‘Add salt’ (without needing ‘Do what, for all you know, would add salt’). It got four upvotes and went third (I considered upvoting my own question, but it wouldn’t have changed how soon it went, so I didn’t). I was going to include a disclaimer about how other arguments for EVC seemed better, but there wasn’t enough space (and asking ‘Could other arguments work better?’ seemed silly, as we saw a better argument in the pre-reading).

The better argument in question: the secondary norm generated by objective consequentialism doesn’t actually match up with EVC, and EVC seems right where they conflict. (Suppose pill A has a 1% chance to cure the patient fully and a 99% chance to kill her painfully; pill B is certain to cure her but give her a headache.) Sometimes the right thing to do, given what you know, is not the right thing to do all things considered.
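The pill case can be made concrete with a toy expected-value calculation. The utility numbers below are my own stipulation (the notes assign none); the point is only that pill A might objectively turn out best, while EVC recommends pill B given the evidence.

```python
# Toy EV comparison for the pill case; utilities are illustrative assumptions.
U_CURE, U_DEATH, U_HEADACHE = 100, -1000, 90

def expected_value(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

ev_a = expected_value([(0.01, U_CURE), (0.99, U_DEATH)])  # pill A: gamble
ev_b = expected_value([(1.00, U_HEADACHE)])               # pill B: sure thing

print(ev_a, ev_b)  # -989.0 90.0 — EVC says take pill B
```

On any remotely sensible assignment (death much worse than a headache), the ordering is robust; the objective consequentialist still has to say that if pill A would in fact cure, taking it was the right act.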

Week 2: Linguistic Objection

Objections to the axiologist’s notions of betterness from ordinary language. Main pre-readings are Thomson (‘The right and the good’) versus Mankowitz (‘Good people are not like good knives’), and there was a cool overview handout, too. We’re interested in reasons for selective skepticism about betterness (by contrast with across-the-board skepticism of all moral notions). This did show me a new reading of ‘good state of affairs’ (read like ‘good knife’), but it’s just obviously not the surface reading (and both readings are pre-theoretically available).

Thomson

Thomson on Moore:

  • Moore’s story:
    1. There is such a property as goodness
    2. There is such a relation of betterness (comparative goodness)
    3. Rightness is analyzable in terms of betterness (maximizing consequentialism)
  • This leads to the paradox of deontology (‘if wrongs are so bad, why shouldn’t you commit one to prevent them?’). Commonly, people will deny (3). But Thomson thinks we should deny (1):
  • First-order (contrast second-order, but consider buck-passing account of goodness next week; first-order might also depend on something even lower) ways of being good have the form ‘good + adjunct’ (‘rice is good to eat’).
  • Second-order ways include virtue properties of acts and people (X’s being good in virtue of X’s being kind, just, etc.) And so moral requirement is derivative from virtue properties.
  • Second-order ways ‘rest on’ first-order ways, especially on goodness for particular people, via virtue consequentialism: is people having trait X good for us? (But goodness for particular people might ground goodness simpliciter).

Greaves on Thomson:

  • Ethics 101 picture includes consequentialism (bring about the best consequences), deontology (obey the rules), and virtue ethics (develop and manifest the virtues)
  • Within virtue ethics, axiology could play two roles.
    • Foundational (which traits count as virtues rather than vices, or neutral traits): determined by what’s conducive to the good. Or, better: which packages of traits count as virtues rather than vices (a murderer having courage would be bad).
    • Content level: one of the virtues is beneficence (aiming to promote the good). But is it a virtue? Is it a matter of aiming to promote the good?
  • But can’t we give a similar account for goodness of states of affairs?
  • What she says about non-designed things (Charles River) seems not to plausibly ground non-welfare goods (beauty, ecosystems, …) in really non-welfarist terms; it leaves open that if people wanted the river to be polluted, it would be good for the river to be so. (We might take, say, plants to be designed via evolution by natural selection.)

Mankowitz

  • Historically, people (Aristotle, Kant, and Moore) do think there is a non-relativized notion of goodness.
  • Standard linguistic tests tell us that ‘good’ is polysemous between a moral and non-moral sense. (So moral good is parallel to good-knife-goodness.)
  • Contradiction test:
    • This book is light, but it isn’t light.
    • Bobby Fischer is a (good chess player), but he isn’t a (good) (chess player).
  • Independent lexical relations:
    • ‘Dark’ is an antonym for one sense of ‘light’, but ‘heavy’ is an antonym for the other.
    • ‘Evil’ is an antonym for one sense of ‘good’, (but perhaps ‘poor’ is an antonym for the other?).
  • Etymologically, both moral and non-moral ‘good’ are from Old English ‘god’ (so polysemy, rather than homophony as with ‘bank’ from Early Scandinavian ‘bannke’ versus ‘bank’ from Middle French ‘banke’).

(Interesting to note that ‘person’ in various languages sometimes seems to have a moral sense and a non-moral sense, by contrast with ‘human’.)

For the relativized notion to do any work (it’s good to water the plants if you want them to live), you need to discharge the antecedent (I want them to live; so, it’s good to water the plants). But we can just understand non-relative good on the model of ‘it’s good to X’ with no (or a trivial) antecedent.

Week 3: Aggregation

Objections to utilitarianism:

  • Container: It sounds like the utilitarian only cares about an abstract “total utility” score, and so cares about people only as ‘containers’ of welfare.
  • Double-Counting: We double-count “reasons” when we consider both the first-order natural facts about goodness-for-people and the [distinct] second-order evaluative facts about goodness (which obtain in virtue of the first-order facts obtaining).
  • Redundancy: If you don’t double-count, then there’s no reason to talk about goodness.

Notice that the weighing heuristic for reasons is flawed (Williamson’s numbers example).

A general response:

Overall betterness arises when we consider trade-offs. ‘From an impartial point of view, considerations of individuals’ welfare, taken together and appropriately weighed against one another when they conflict, favor x over y.’

Finite fixed-population utilitarianism might generalize, on necessitism, to give utilitarian (in particular, totalist) verdicts in variable-population cases. But we could also just modify the axioms (it’s not immediately obvious that merely-possible people have welfare level zero, though I think this is quite plausible). But in infinite-welfare (divergent total welfare) cases, we have some impossibility theorems (e.g., Amanda Askell). Unsure how infinite-subjects-but-convergent-welfare-sum goes!

The weighted-coin people in the rock case seem like the sort of people who would flip a weighted coin to predict the outcome of a weighted coin. (Never mind, actually; really good point from Tomi: give everyone an equal chance of being the first person to be saved; then it just so happens that whenever one of the five is picked first, the other four are lucky enough to be saved along with them.)
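Tomi’s lottery can be checked with a quick simulation. The setup below is my own rendering of the case (one person on rock A, five on rock B; whoever is drawn first, everyone on their rock gets saved): each person has an equal 1/6 chance of being picked, yet each of the five is saved with probability 5/6.

```python
import random

def lottery_saves(trials=100_000, seed=0):
    """Equal-chance lottery over six people on two rocks (A: 1, B: 5)."""
    rng = random.Random(seed)
    people = ["A1"] + [f"B{i}" for i in range(1, 6)]
    saved = {p: 0 for p in people}
    for _ in range(trials):
        first = rng.choice(people)   # equal chance of being saved first
        rock = first[0]              # everyone on that rock is saved too
        for p in people:
            if p[0] == rock:
                saved[p] += 1
    return {p: c / trials for p, c in saved.items()}

probs = lottery_saves()
# probs["A1"] is close to 1/6; each of probs["B1"]..probs["B5"] is close to 5/6
```

So the procedure is fair ex ante (everyone gets the same chance of being the one selected) even though the numbers end up mattering ex post.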

Tomi’s framing of the container objection is quite good: there is no point of view of the universe which really matters, you can’t go beyond what matters for individuals.

(Hilary raised her hand for thinking Partiality is more plausible than Save More Rather Than Less; this was really surprising to me!)

[…]