Ethics
PHIL 360
Utilitarianism by John Stuart Mill §
Good outcomes:
- health
- security
- the grind
- education
- relationships
satisfaction of desire vs pleasant experiences
the happiness of everyone whom the action will affect
utilitarianism is indifferent as to person and time: the agent’s happiness counts just as much as, but no more than, everyone else’s.
predictions further down the road are less certain
utilitarianism requires a lot of data (a formula sketch follows this list):
- who will be affected
- how will they be affected
- how much happiness will this give them
- downstream effects and the happiness they bring
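One way to sketch the calculus these notes gesture at, with notation chosen here purely for illustration: add up the expected happiness changes of everyone affected, discounting downstream effects by how certain we can be about them.

$$
U(a) = \sum_{i \in \text{affected}(a)} \sum_{t} p_{i,t}(a)\, \Delta h_{i,t}(a)
$$

Here $\Delta h_{i,t}(a)$ is the change in person $i$’s happiness at time $t$ if act $a$ is performed, and $p_{i,t}(a)$ is how confident we can be that the effect actually occurs (predictions further down the road get smaller weight). The criterion of rightness then says $a$ is right iff no alternative has a higher $U$; the agent’s own term carries the same weight as anyone else’s.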
Mill: higher pleasures > lower pleasures (no exceptions)
- criterion of rightness: does it maximize utility?
- decision procedure (“how should i make moral decisions?”)
cases where we can maximize utility by going against the rules of common morality
- anesthesiologist
- grading
- betraying friend
but maximizing utility by betraying common morality is uncertain, so in expectation it doesn’t actually maximize utility
Bernard Williams §
- George and his job dilemma
- Jim, Pedro, and the Indians in S. America
integrity: acting from the agent’s own values. alienation: one’s actions no longer flow from one’s desires, commitments, and values; one is coerced into going against one’s value system.
- integrity
- alienation
- integrity
- moral autonomy
- our own projects
- people
- negative responsibility
utilitarianism is committed to negative responsibility (you are as responsible for what you fail to prevent as for what you do; only outcomes matter). but there’s only so much you can do, and it leaves no time for your own projects.^1
additionally, alienation from other people: particular people matter to us, but utilitarianism counts everyone’s happiness the same.
Fried’s hypothetical: sinking boat, life raft. wife vs stranger.
- shouldn’t save the wife: implausible on its face
- should save the wife: fine, but reaching that verdict via the utilitarian calculus is “one thought too many”
Peter Railton §
What morality requires vs what our relationships require
Morality doesn’t dominate all reasons (e.g. personal attachment)
Claim: Can be utilitarian without making decisions based on what will maximize utility.
- objective consequentialism: a criterion of rightness
- subjective consequentialism: a decision procedure: decide what to do by figuring out what will maximize utility
objective but not subjective consequentialist: the “sophisticated consequentialist”. alternative decision procedure: loved ones >> others. un-alienated (but if they thought this didn’t maximize the good, they’d switch to a different decision procedure).
Mill: not a subjective consequentialist. his decision procedure: the signposts of common-sense morality, which will usually maximize utility.
- tennis player
- hedonism paradox
John: “people should …”, “better position to help”. general, impersonal, alienated.
Juan: love, liking. personal. the good is maximized when people pay attention to the people they care about; in his view, not having the one-thought-too-many is what maximizes utility. worry: is he more of a utilitarian than a loving husband?
sometimes consequentialist deliberation is self-defeating
- is it true that answering based on emotional attachment will maximize good consequences? case-by-case?
- is Juan’s set of attitudes consistent? he both attends to their wants/needs and is committed to switching attitudes if that didn’t maximize utility
analogy: while talking to a person:
- darting eyes. if someone more interesting spotted, abort.
- if phone rings, abort.
Rule-Consequentialism by Brad Hooker §
decision procedure: which rules, if generally accepted, would bring about the most good consequences? when deciding what to do, apply that consequence-maximizing ruleset.
fewer counter-intuitive conclusions than act consequentialism etc. likely to include intuitive rules. a la Mill and his signposts.
| Who | Criterion of Rightness | Decision Procedure |
|---|---|---|
| Mill | Utility maximizing | Signposts: common-sense morality |
| Hooker | Rules | Rules |
utility principle → rules → apply rules
stages:
- use utility to select a set of rules
- use the rules:
  - as a criterion of rightness
  - as a decision procedure
in particular cases, applying the rules will not always maximize utility.
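A compact way to write the two stages above, with notation introduced here only for illustration:

$$
R^{*} = \arg\max_{R}\ \mathbb{E}\big[\text{good} \mid R \text{ generally accepted}\big],
\qquad a \text{ is right} \iff a \text{ conforms to } R^{*}
$$

The maximization is done once, over candidate rule sets; individual acts are then assessed by conformity to $R^{*}$, which is why a particular act that conforms to the rules need not itself maximize utility.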
the relevant rules are the ones that will promote all the good stuff (whatever the conception of “good stuff” might be).
Hooker won’t go for rules that would result in an unfair distribution of the good. the right rules are those that, if generally accepted, would produce:
- a lot of good
- a fair distribution of that good
general acceptance:
- general compliance
- general internalization (aka a disposition to comply)
internalization brings about some good consequences that mere compliance can’t cause. deterrence.
collapse objection: rule consequentialism is just act consequentialism with extra steps.
reply: what matters is internalization of the rules, not just compliance. bringing each new generation to accept the rules has costs, and those costs may outweigh the benefits; less demanding rules are easier to instil. therefore no collapse.
less than 100% compliance. rule-breakers and amoralists. rules for dealing with rule-breakers.
exception for disasters. therefore not too rigid.
Alastair Norcross §
Ranking of actions from best to worst. No “right” or “wrong”; only better and worse. No duties, and thus no supererogation. Not action-guiding. Praiseworthiness, but no blameworthiness.
Immanuel Kant §
the good will would “shine forth as a jewel” even if it accomplished nothing
Morality is grounded in rationality.
- theoretical rationality: what’s true?
- practical rationality: what should i do?
morality of requirements and prohibitions.
the categorical imperative: the fundamental, overarching principle of morality. different formulations:
- formula of universality: act only on maxims that you could will to be universal laws.
- formula of humanity: always treat rational agents as ends in themselves, never merely as means.
- formula of the kingdom of ends: always act as though you’re legislating for a kingdom where everyone is an end in themselves.
the good will has moral worth in itself. the focus is on the will with which someone acts, which makes this clearly distinct from consequentialism.
- actions done out of inclination: desire, urges, attitudes of affection/benevolence/hostility; basically anything but duty.
- actions done out of duty: performed not because the agent wants to achieve something by performing it, but because they recognize that they have an obligation to perform it.
Kant: actions have moral worth when they’re performed out of duty.
requirements of morality are categorical.
the opposite of Aristotle, in a way: for Kant, what matters is recognizing and responding to obligation, not affect or habit.
what about overdetermination (duty and inclination both present)? Kant sounds as if he requires duty without inclination; the charitable reading: the motive of duty is sufficient (and, of course, necessary).
autonomous vs heteronomous. autonomous: free and guided by one’s own judgement. heteronomous: unfree. heteronomous actions don’t have moral worth.
hostility and sympathy don’t have their source in our judgement.
acts done out of duty are autonomous.
moral worth stems from a non-contingent feature of us: rationality.
- categorical imperative: a command which applies regardless of what desires/inclinations you may have
- hypothetical imperative: a command that holds on the assumption that you have certain desires/inclinations. you can’t know what desires any particular person is going to have, so imperatives of this kind can’t ground universal principles
[…] for only their relation to a particularly constituted faculty of desire in the subject gives them their worth. And this worth cannot, therefore, afford any universal principles for all rational beings or valid and necessary principles for every volition
autonomous actions are performed under categorical imperatives.
heteronomous actions are performed under hypothetical imperatives.
| motive | action | imperative |
|---|---|---|
| duty | autonomous | categorical |
| inclination | heteronomous | hypothetical |
- why should i believe that?
  - theoretical reason
- why should i do that?
  - practical reason
- what do you have most reason to do?
Kant: if you have reason to do something, then everyone in those same circumstances also has reason to do the same thing. reasons are universal; to deny this is irrational.
the golden rule: “do unto others as you would have them do unto you”. taken literally, it yields imperatives dependent on your desires (consider instead the negative version). for Kant, your obligations can’t depend on what you want; therefore the categorical imperative is not the same as the golden rule.
examples of principles of action that can’t be willed to be universal laws:
- [active] [self] commit suicide when life is painful
- [active] [others] make promises you can’t keep
- [passive] [self] neglect your talents
- [passive] [others] don’t help those who need help
to will the end is to will the means
- can’t conceive (contradiction in conception)
- can conceive, but can’t will (contradiction in the will)
lying, grade inflation, theft
kant: you’re identical across time.
universality: testing reasons
perfect duty (promises, not lying) vs imperfect duty (aid, talents)
universality == humanity — “don’t make an exception of yourself”
- coercion
- deception
The Right to Lie by Christine Korsgaard §
problem: Kant seems too rigid (never lie, even to the murderer at the door).
- knowledge
- freedom
what would be wrong if people didn’t believe others: (words would lose meaning) vs (we couldn’t use words to accomplish goals)
you can universalize the maxim of lying to the murderer, because the murderer doesn’t know that you know he’s lying (so universalizing wouldn’t defeat the lie’s purpose).
but what about lying to a murderer who doesn’t lie to you?
should we let those who aren’t committed to following the rules have an advantage over us?
ideal theory: formula of humanity; non-ideal theory (dealing with evil): formula of universalizability
Maria Von Herbert’s Challenge to Kant by Rae Langton §
Maria: Kantian saint
duty: mastering your inclinations
Moral Luck by Thomas Nagel §
existence of moral luck implies blameworthiness/praiseworthiness for actions we have no control over
forms of moral luck:
- consequential: luck in the consequences of what you do
  - baby being bathed; caretaker leaves to accept a call
  - driving; yellow light; sped up
“equally blameworthy, but in one case we can let it slide”
- causal
  - no control over actions
  - no control over outcomes
  - it’s the bad luck of the bad actor that they acted badly, and the good luck of the good actor that they acted well: moral luck
- constitutive: luck in being the kind of person you are
- circumstantial: whether or not someone does a bad thing depends on the circumstances they find themselves in
  - e.g. Germans who were complicit in the Nazi regime were blameworthy; many of us have never found ourselves in such a position
  - those who did are blameworthy; those who didn’t, but would have acted the same in that situation, are morally lucky
nagel’s observation: we often hold people responsible for things they can’t control
can one control one’s character? at most, one can attempt to improve it over time
control principle: people can be blamed for their vices only to the extent they have control over them
to take steps towards eliminating a vice, you must see it as a vice
control principle vs circumstantial luck
- degree of risk
- degree of unease
Morality, Objectivity and Knowledge §
cultural relativism: about what is the case
moral relativism: about what ought to be the case
challenge: to go from cultural relativism to moral relativism
The Challenge of Cultural Relativism by James Rachels §
an argument:
1. [cultural relativism] different societies have different moral codes
2. there is no objective, absolute truth about what’s right or wrong
3. [moral relativism] the right thing to do is whatever your society’s moral code says is right
society of abundance vs society of scarcity
different attitudes explained by different material circumstances. same values though?
[1]? §
[1] implies [2]? §
[2] implies [3]? §
in fact, [2] and [3] are contradictory?? §
so [3] says there is a right thing to do after all, contradicting [2]? [3] would also mean that there couldn’t be any moral progress
also, what is “society’s moral code” anyway? a society holds several conflicting values; how does one extract guidance from that?
But I Could Be Wrong by George Sher §
- [controversy] as just another person, I have no special reason to trust my own moral judgements more than anyone else’s.
- [contingency] a different upbringing and set of experiences would have caused me to have a different moral outlook, so my outlook is in that sense an accident; its origin might have nothing to do with its truth or validity.
not questioning if there’s objective truth, but questioning whether and why one’s positions are valid.
sorta related to skepticism, but more urgent/consequential.
Objectivity in Ethics §
is there objectivity to moral beliefs?
In every system of morality which I have hitherto met with, I have always remarked that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a god, or makes observations concerning human affairs; when of a sudden I am surprised to find that instead of the usual copulations of propositions is and is not, I meet with no proposition that is not connected with an ought or an ought not. This change is imperceptible, but is, however, of the last consequence. For as this ought or ought not expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given for what seems altogether inconceivable, how this new relation can be a deduction from others which are entirely different from it. (Hume)
how things are →(?) how things ought to be
is-ought problem aka fact-value gap.
The Emotive Theory of Ethics by A. J. Ayer §
non-cognitivism: statements about morality have no cognitive content and don’t state facts; they have only emotive content
cognitivism: claims about ethics try to state facts
“is wrong” doesn’t describe the way things are. we express our attitudes (or emotions) and try to get others to share them.
- [negative] normative claims aren’t trying to say anything true
- [positive] when we use normative language, we just express our approval or disapproval
Ayer: empiricist wrt language (particularly moral discourse). the “verifiability principle”: any utterance that is neither true by definition nor verifiable by sensory observation doesn’t mean anything. a meaningful statement is either:
- true by definition (denying it must be self-contradictory), or
- verifiable by sensory observation
expressing an attitude vs asserting that you have an attitude
the verifiability principle is widely rejected: no sentence is verifiable or falsifiable by itself; other explanations can always be added on to defend a statement.
also, apply the verifiability principle to itself: it is neither true by definition nor verifiable by observation, so by its own standard it is meaningless.
The Subjectivity of Values by J. L. Mackie §
cognitivist: moral statements have cognitive content.
Ayer: we aren’t even trying to express a truth. we’re expressing an attitude.
Mackie: ordinary moral language is trying to express a truth (but failing).
error theory. we’re making the mistake of thinking that there are moral values out there that would make our beliefs true or false.
- argument from relativity: if there’s a truth, why don’t different cultures converge? because there is no truth. controversy → no moral truth.
  - even if there’s convergence on some issues, there are disagreements over other issues; not good enough.
- argument from queerness: assume moral claims are true. what makes them true must be independent of our minds (i.e. extrinsic). moral beliefs have normative content: the kind of fact whose mere existence would give you reason to do something would be “very strange”; it must not just give you a reason, but also motivate you. since this fact would be very queer, it’s unlikely to exist.
“when we judge things to be good, it’s because we want them”
response: arguably it’s the other way around; we want things because we judge them to be good.
The Objectivity of Values by Thomas Nagel §
we move toward more objectivity by stepping back from our beliefs and forming a picture of the way things are that includes us and the way our beliefs are formed
objectivity vs realism. objectivity is a feature of our beliefs. realism: our beliefs are true for extrinsic reasons.
Mackie denies realism; Nagel accepts realism, but distinguishes it from objectivity
to Nagel, there are degrees of objectivity.
Perfectionism by Thomas Hurka §
Psychological Egoism by Joel Feinberg §
psychological egoism: by helping others, people would really be helping themselves, e.g. guilt-alleviation.
- moral education: rewards/punishments motivate us for self-interested reasons
- self-deception
- desire as motivation: “we only do things that we desire”
  - reply: subject vs object of desire; conflating the aim of the act with its consequences
  - Lincoln and the pigs: why would he have lost his peace of mind unless he already cared about the pigs?
- helping others makes us happy
  - e.g. the warm glow of thinking of yourself as a good person: actually helping vs just taking a pill that gives you the same glow