I’m quite happy that you are thinking critically about what you are reading! I don’t think you wrote a perfect criticism (see below), but the act of taking the time to write a criticism and posting it to a public venue is not an easy step. EA always needs people who are willing and eager to probe its ethical foundations. Below I’m going to address some of your specific points, mostly in a critical way. I do this not because I think your criticism is bad (though I do disagree with a lot of it), but because I think it can be quite useful to engage with newer people who take the time to write reasonably good reactions to something they’ve read. Hopefully, what I say below is somewhat useful for understanding the reasons for longtermism and what I see as some flaws in your argument. I would love for you to reply with any critiques of my response.
This has been quoted several times, even though it’s an absurd argument on its face. Imagine the world where Cleopatra skipped dessert. How does this cure cancer?
It doesn’t, and that isn’t Parfit’s point. His point is that if one applied a discount rate to welfare, then, compounded over the two thousand years since Cleopatra, her dessert would matter more than nearly anything that happens today. Since (he claims) that conclusion is clearly wrong, there must be something clearly wrong with applying a discount rate.
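To see the scale involved, here is a minimal sketch (my own illustration, not from Parfit or the post; the 5% rate and the 2,000-year gap are just round numbers) of how a constant discount rate weights welfare across that span:

```python
# Toy illustration of Parfit's point: a constant positive discount rate,
# compounded over ~2,000 years, makes one unit of welfare in Cleopatra's
# time outweigh an astronomical amount of welfare today.
DISCOUNT_RATE = 0.05          # 5% per year (illustrative assumption)
YEARS_SINCE_CLEOPATRA = 2000  # rough round number

# Relative weight of one unit of Cleopatra-era welfare vs. one unit today.
weight = (1 + DISCOUNT_RATE) ** YEARS_SINCE_CLEOPATRA
print(f"{weight:.2e}")  # ≈ 2.39e+42
```

On these numbers, her extra dessert would outweigh roughly 10^42 units of present-day welfare, which is exactly the conclusion Parfit treats as a reductio.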
Most of the 80000 hours article attempts to persuade the reader that longtermism is morally good, by explaining the reasons that we should consider future people. But the part about how we are able to benefit future people is very short.
Well yes, but that’s because it’s covered in the other pages linked there. Mostly it comes down to whether existential risks are likely to arrive soon, and whether there is anything we can do about them. That isn’t really within the scope of that article, though I agree the article doesn’t show it.
The world is a complex system, and trying to affect the far future state of a complex system is a fool’s errand.
That isn’t entirely true. Some things routinely affect the far future of complex systems. For instance, complex systems can collapse, and if you cause a system to collapse, you have pretty clearly affected its far future. Conversely, if a system is about to collapse due to an extremely rare event, preventing that collapse can also affect its far future state.
Let’s look at a few major breakpoints and see whether longtermism was a significant factor.
Obviously, it wasn’t. But of course it wasn’t! Longtermism didn’t exist yet, so it couldn’t have been a significant factor in anyone’s decisions. Maybe you are trying to say “people can make long-term changes without being motivated by longtermism.” But that doesn’t tell us whether longtermism might make people better at creating long-term changes than they otherwise would be.
We can achieve longtermism without longtermism
I generally agree with this, and so do many others. For instance, see here and here. However, I think it’s possible that this will stop being true at some point in the future. I personally would like to have longtermism around in case there is ever something where it really matters, mostly because I think it is roughly correct as a theory of value. Some people may even think that point has already arrived. I don’t want to speak for anyone, but my sense is that people who work on suffering risks are generally thinking in longtermist terms but don’t care as much about existential risk.
The main point is that intervening for long term reasons is not productive, because we cannot assume that interventions are positive. Historically, interventions based on “let’s think long term”, instead of solving an immediate problem, have tended to be negative or negligible in effect.
First, I agree that interventions may be negative, and I think most longtermists would strongly agree with this too. As for whether historical “long term” interventions have tended to be negative, you’ve asserted it but you haven’t really shown it. I would be very interested in research on this; I’m not aware of any. If it were true, I do think that would be a knock against longtermism as a theory of action (though not a decisive one, and not against longtermism as a theory of value). Though one could still argue that we live at “the hinge of history,” where longtermism is especially useful.
I drew a distinction between a theory of value and a theory of action. A theory of value (or axiology) is a theory about which states of the world are best. For instance, it might say that a world with more happiness, or more justice, is better than a world with less. A theory of action is a theory about what you should do; for instance, that we should take whichever action produces the maximum expected happiness. Greaves and MacAskill make the case for longtermism as both. But one could imagine accepting longtermism as a theory of value while rejecting it as a theory of action.
For instance, you write:
Some blood may be shed and lives may be lost, but the expected value is strongly positive.
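To make concrete what that kind of expected-value reasoning looks like, here is a toy sketch (entirely my own; the actions, probabilities, and happiness numbers are invented) of a decision rule that simply maximizes expected happiness:

```python
# Toy "theory of action": naive expected-happiness maximization.
# All actions, probabilities, and happiness values here are made up.
actions = {
    "safe_intervention":  [(1.0, 10.0)],                 # (probability, happiness)
    "violent_revolution": [(0.1, 500.0), (0.9, -20.0)],  # small chance of a huge payoff
}

def expected_happiness(outcomes):
    """Probability-weighted sum of happiness over an action's outcomes."""
    return sum(p * h for p, h in outcomes)

best = max(actions, key=lambda a: expected_happiness(actions[a]))
print(best)  # "violent_revolution": its EV of 32.0 beats 10.0
```

A rule like this endorses the bloody option whenever someone’s estimates say the expected value is “strongly positive,” which is exactly why the reliability of those estimates matters so much.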
Various philosophers, Parfit himself among them, have suggested that for this reason many utilitarians should actually “self-efface” their morality. In other words, they should perhaps come to believe that killing large numbers of people is bad even if it increases utility, because they might simply be wrong about the utility calculation, or might delude themselves into thinking that what they already wanted to do produces a lot of utility. I gave some more resources/quotes here.

Thanks for writing!
Thanks ThomasWoodside! I noticed the forum has relatively low throughput so I decided to “learn in public” as it were :)
I understand the Cleopatra paragraph now and I’ve edited my post. I wasn’t able to understand his point before, so I got it wrong. Thanks for explaining it!
Obviously, it wasn’t. But of course it wasn’t! Longtermism didn’t exist yet, so it couldn’t have been a significant factor in anyone’s decisions. Maybe you are trying to say “people can make long-term changes without being motivated by longtermism.” But that doesn’t tell us whether longtermism might make people better at creating long-term changes than they otherwise would be.
This is a good point. I wanted to show that “longtermism is not necessary for long-term changes,” which I think is pretty likely true. The stronger claim is that “longtermism would not produce better long-term changes,” and those examples don’t address that point.
My intuition is that a longtermist mindset likely would not have had a significant positive impact (as in the hypothetical examples I wrote), but it’s pretty hard to “prove” that because we don’t have a counterfactual history. We could go through historical examples of people with long-term views (in journals and diaries?) and see whether they had a positive or negative impact. That might be a big project, though.
I generally agree with this, and so do many others. For instance, see here and here.
These are really good links, thank you!
As for whether historical “long term” interventions have tended to be negative, you’ve asserted it but you haven’t really shown it. I would be very interested in research on this; I’m not aware of any. If it were true, I do think that would be a knock against longtermism as a theory of action (though not a decisive one, and not against longtermism as a theory of value). Though one could still argue that we live at “the hinge of history,” where longtermism is especially useful.
Same! I agree this is a weakness of my post. The theory of action vs. theory of value distinction is a useful one: I don’t have a strong view on longtermism as a theory of value; I mostly care about the theory of action.