Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere? podcast maybe?) that I’m not particularly worried about the current instantiation of longtermism, but rather about what this kind of logic could justify.
I totally agree that most of the existential threats currently tackled by the EA community are real problems (nuclear threats, pandemics, climate change, etc.).
I would note that the Greaves and MacAskill paper actually has a section putting forward ‘advancing progress’ as a plausible longtermist intervention!
Yeah—but I found this puzzling. You don’t need longtermism to think this is a priority—so why adopt it? If you instead adopt a problem/knowledge focused ethics, then you get to keep all the good aspects of longtermism (promoting progress, etc), but don’t open yourself up to what (in my view) are its drawbacks. I try to say this in the “Antithesis of Moral Progress” section, but obviously did a terrible job haha :)
If you instead adopt a problem/knowledge focused ethics, then you get to keep all the good aspects of longtermism (promoting progress, etc), but don’t open yourself up to what (in my view) are its drawbacks
Maybe (just maybe) we’re getting somewhere here. I have no interest in adopting a ‘problem/knowledge focused ethic’. That would seem to presuppose the intrinsic value of knowledge. I think knowledge is only instrumentally valuable, insofar as it promotes welfare.
Instead, most EAs want to adopt an ethic that prioritises ‘maximising welfare over the long run’. Longtermism claims that the best way to do so is to actually focus on long-term effects, which may or may not require a focus on near-term knowledge creation; whether it does is essentially an empirical question. If it doesn’t require it, then a strong longtermist shouldn’t consider a lack of knowledge creation to be a significant drawback.