What I meant by this was that I think you and Ben both seem to assume that strong longtermists don’t want to work on near-term problems. I don’t think this is a given (although it is of course fair to say that they’re unlikely to only want to work on near-term problems).
Mostly agree here; this was the reason for some of the (perhaps cryptic) paragraphs in the section "The Antithesis of Moral Progress." Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And to the extent that it does have us working on real problems, I'm not sure what longtermism is actually adding.
Also, just a nitpick on terminology: I dislike the term "near-term problems," because it seems to imply that there is a well-defined class of "future" problems we could choose to work on instead, as if problems formed a set that could be classified as either short-term or long-term. But the fact is that the only problems are near-term problems; everything else is just a guess about what the future might hold. So I see it less as a choice about which kinds of problems to work on and more as a choice between working on real problems and conjecturing about future ones, and I think the latter is actively harmful.
So I see it less as a choice about which kinds of problems to work on and more as a choice between working on real problems and conjecturing about future ones, and I think the latter is actively harmful.
I don’t necessarily see working on reducing extinction risk as wildly speculating about the far future. In many cases these extinction risks are actually thought to be current risks; the point is that, if they happen, they necessarily curtail the far future.
Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And to the extent that it does have us working on real problems, I'm not sure what longtermism is actually adding.
I would note that the Greaves and MacAskill paper actually has a section putting forward ‘advancing progress’ as a plausible longtermist intervention! As I have mentioned, this is only insofar as it will make the long-run future go well.
Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere, maybe on the podcast?) that I’m not particularly worried about the current instantiation of longtermism, but about what this kind of logic could justify.
I totally agree that most of the existential threats currently tackled by the EA community are real problems (nuclear threats, pandemics, climate change, etc.).
I would note that the Greaves and MacAskill paper actually has a section putting forward ‘advancing progress’ as a plausible longtermist intervention!
Yeah, but I found this puzzling. You don’t need longtermism to think this is a priority, so why adopt it? If you instead adopt a problem/knowledge-focused ethics, you get to keep all the good aspects of longtermism (promoting progress, etc.) without opening yourself up to what are, in my view, its drawbacks. I tried to say this in the “Antithesis of Moral Progress” section, but obviously did a terrible job haha :)
If you instead adopt a problem/knowledge-focused ethics, you get to keep all the good aspects of longtermism (promoting progress, etc.) without opening yourself up to what are, in my view, its drawbacks.
Maybe (just maybe) we’re getting somewhere here. I have no interest in adopting a ‘problem/knowledge-focused ethic’. That would seem to presuppose the intrinsic value of knowledge, whereas I only think knowledge is instrumentally valuable insofar as it promotes welfare.
Instead, most EAs want to adopt an ethic that prioritises ‘maximising welfare over the long run’. Longtermism claims that the best way to do so is to actually focus on long-term effects, which may or may not require a focus on near-term knowledge creation; whether it does is essentially an empirical question. If it doesn’t, then a strong longtermist shouldn’t consider a lack of knowledge creation to be a significant drawback.