1.
1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don’t. The ones that do often don’t care about increasing or decreasing the number of people in the future, but about improving the lives of future people who will exist anyway. That’s compatible with longtermism; not all longtermism is about extinction risk. (See trajectory change and s-risk.)
1.2.: No, we don’t just care about humans. In fact, I think it’s quite likely that most of the value or disvalue will come from non-human minds. (Though I’m thinking digital minds rather than animals.) But we can’t influence how the future will go if we’re not around, and many x-risk scenarios would be quite bad, full stop, not just bad for humans.
1.3.: You might want to have a look at cluelessness (the EA Forum and the GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future and how we can influence it is definitely extremely hard, but I don’t think we’re in such a bad position that we can, in good conscience, just throw our hands up and conclude there’s definitely nothing to be done here.
2.
2.1 + 2.2.: Don’t really want to write anything on this right now
2.3.: Definite no. It just argues that trade-offs must be made, and that some bads are even worse than current suffering. Or rather: the amount of bad we can avert is even greater than what we could avert by focusing on current suffering.
2.4: Don’t understand what you’re getting at.
3.
3.1.: Can’t parse the question
3.2.: I think many longtermists struggle with this. Michelle Hutchinson recently wrote a post on the EA Forum about what still keeps her motivated; you can find it by searching for her name there.
3.3.: No. Longtermism per se doesn’t say anything about how much to personally sacrifice. You can believe in longtermism and think that you should give away your last penny and work every waking hour in a job you don’t like. You can also not be a longtermist and still think you should live a comfortable, expensive life because that’s what’s most sustainable. Some leanings on this question might correlate with whether you’re a longtermist, but in principle the question is orthogonal.
Sorry if the tone is brash; if so, that’s unintentional, and I appreciate that you’re thinking about this. (Also, I’m writing this as sleep procrastination, and guilt is driving my typing speed; I tend to be really slow otherwise.)