If you instead adopt a problem/knowledge-focused ethics, then you get to keep all the good aspects of longtermism (promoting progress, etc.), but don’t open yourself up to what (in my view) are its drawbacks.
Maybe (just maybe) we’re getting somewhere here. I have no interest in adopting a ‘problem/knowledge focused ethic’. That would seem to presuppose the intrinsic value of knowledge. I only think knowledge is instrumentally valuable insofar as it promotes welfare.
Instead, most EAs want to adopt an ethic that prioritises ‘maximising welfare over the long run’. Longtermism claims that the best way to do so is to actually focus on long-term effects, which may or may not require a focus on near-term knowledge creation—whether it does or not is essentially an empirical question. If it doesn’t require it, then a strong longtermist shouldn’t consider a lack of knowledge creation to be a significant drawback.