A few years ago, I made an outline of Evan G. Williams’ excellent philosophy paper, “The Possibility of an Ongoing Moral Catastrophe,” for a local discussion group. It slowly got circulated on the EA internet. Somebody recently recommended that I make the summary more widely known, so here it is.
The paper is readable and not behind a paywall, so I’d highly recommend reading the original paper if you have the time.
Summary
I. Core claim
Assuming moral objectivism (or a close approximation), we are probably unknowingly guilty of serious, large-scale wrong-doing (“ongoing moral catastrophe”).
II. Definition: What is a moral catastrophe? Three criteria:
Must be a serious wrong-doing (closer to wrongful death or slavery than mild insults or inconveniences).
Must be large-scale (not, e.g., a single wrongful execution or a single person tortured).
Broad swathes of society are responsible through action or inaction (can’t be unilateral unavoidable actions by a single dictator).
III. Why we probably have unknown moral catastrophes. Two core arguments:
The Inductive Argument
Assumption: It’s possible to engage in great moral wrongdoing even while acting in accordance with your own morals and those of your society.
Basic motivation: an honest, sincere Nazi still seems to be acting wrongly in important ways.
It’s not relevant whether this wrongdoing is due to mistaken empirical beliefs (All Jews are part of a major worldwide conspiracy) or wrong values (Jews are subhuman and have no moral value).
With that assumption in mind, pretty much every major society in history has acted catastrophically wrongly.
Consider conquistadores, crusaders, caliphates, Aztecs, etc., who conquered in the name of God(s) whom they called good and just.
It’s unlikely that all of these people in history only professed such a belief, and that all of them were liars instead of true believers.
Existence proof: People can (and in fact do) do great evil without being aware of it.
Our committing ongoing moral catastrophes isn’t just a possibility; it’s probable.
We are not that different from past generations: literally hundreds of generations have thought that they actually were right and had figured out the One True Morality.
As recently as our parents’ generation, it was a common belief that some people have more rights than others because of race, sexuality, etc.
We live in a time of moral upheaval, where our morality is very different from our grandparents’.
Even if some generation eventually figures out All of Morality, the generation that gets everything right is probably one whose parents got almost everything right.
The Disjunctive Argument
Activists are not exempt. Even if all your pet causes come to fruition, this doesn’t mean our society is good, because there are still unknown moral catastrophes.
There are so many different ways that a society could get things very wrong, that it’s almost impossible to get literally everything right.
This isn’t just a minor concern: we could be wrong in ways whose badness is a sizable fraction of the Holocaust’s.
There are many different kinds of ways that society could be wrong.
We could be wrong about who has moral standing (e.g., fetuses, animals).
We could be empirically wrong about what harms people who morally matter (e.g., religious indoctrination of children).
We could be right about some obligations but not others.
We could act immorally by paying too much attention to, and spending resources on, false moral obligations (à la the crusaders).
We could be right about what’s wrong and should be fixed, but wrong about how to prioritize different fixes.
We could be right about what’s wrong, but wrong about what is and is not our responsibility to fix (e.g., poverty, borders).
We could be wrong about the far future (e.g., natalism, existential risk).
Within each category, there are multiple ways to go wrong.
Further, some are mutually exclusive. E.g., pro-lifers could be right and abortion is a great sin; or fetuses might not morally matter, and it’s greatly immoral to deprive women of their freedom (e.g., in third-trimester abortions).
Unlikely that we’re currently at the golden mean for all of these trade-offs.
Disjunction comes into play.
Even if you believe that we’re 95% likely to be right on each major issue, and there are maybe 15 such issues, the total probability that we’re right about all of them is only about 0.95^15 ≈ 46% (LZ: this assumes independence).
In practice, 95% confidence on each major issue seems far too high, and 15 issues far too few; see the quick calculation below.
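(LZ: To make the sensitivity concrete, here is a minimal sketch of the arithmetic. The 95%-per-issue and 15-issue figures are the paper’s illustration; the 90%/20 row is my own purely illustrative variation, and independence is assumed throughout.)

```python
# Disjunctive arithmetic sketch: probability of being right about *every* major
# moral issue, assuming the issues are independent of one another.
for p_right, n_issues in [(0.95, 15),   # the paper's illustrative numbers
                          (0.90, 20)]:  # an illustrative, arguably still-generous variant
    p_all_right = p_right ** n_issues
    print(f"P(right on each) = {p_right:.0%}, issues = {n_issues}: "
          f"P(right on all) ≈ {p_all_right:.0%}")

# Output:
# P(right on each) = 95%, issues = 15: P(right on all) ≈ 46%
# P(right on each) = 90%, issues = 20: P(right on all) ≈ 12%
```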
IV. What should we do about it?
Discarded possibility: hedging. If you’re not sure, play it “safe”, morally speaking.
E.g., even if you think farmed animals probably aren’t sentient, or that sentience doesn’t morally matter, you can go vegetarian “just in case”.
This does NOT generally work well enough because it’s not robust: as noted, too many things can go wrong, some in contradictory directions.
Recognition of Wrongdoing
Actively try to figure out which catastrophic wrongs we’re committing
Research more into practical fields (e.g., animal consciousness) where we could be critically wrong
Research more into moral philosophy
Critical: it’s bad to have increasing technological knowledge without increasing moral wisdom.
Imagine Genghis Khan with nuclear weapons.
These fields must interact
It’s not enough for philosophers to say that animals matter if they are conscious, and for scientists to say that dolphins are conscious without knowing whether that matters; our society must be able to integrate the two.
Need marketplace of ideas where true ideas win out
Rapid intellectual progress is critical.
If it’s worth fighting literal wars to defeat the Nazis or end slavery, it’s worth substantial material investment and societal loss to figure out what we’re currently doing wrong.
Implementation of improved values
Once we figure out what great moral wrongs we’ve committed, we want to be able to make moral reparations for past harms, or at least stop committing further harms of that kind as quickly as possible.
To do this, we want to maximize flexibility in material conditions
Extremely poor/war-torn societies would be unable to make rapid moral changes as needed
LZ example: Complex systems built along specific designs are less resilient to shocks, and also harder to change, cf. Antifragile.
In the same way we stockpile resources in preparation for war, we might want to save up resources for future moral emergencies, so we can, e.g., pay reparations, or at least quickly make the relevant changes.
LZ: Unsure how this is actually possible in practice. E.g., individuals usually save by investing, and governments save by buying other governments’ debt or by investing in the private sector, but it’s unclear how the world “saves” as a whole.
We want to maximize flexibility in social conditions
Even if it’s materially possible to make large changes, society might make such changes very difficult because of inertia and conservatism bias.
Entrenching values in hard-to-change constitutional provisions, for example, is suspect from this perspective.
V. Conclusion/Other remarks
Counterconsideration One: Building a society that can correct moral catastrophes isn’t the same as actually correcting moral catastrophes.
Counterconsideration Two: Many of the measures suggested above to prepare for correcting moral catastrophes may themselves be evil.
E.g., money spent on moral research could instead have been spent on global poverty; building a maximally flexible society might involve draconian restrictions on current people’s rights.
However, this is still worth doing in the short term.