It is certainly possible to accuse me of taking the phrase “ignoring the effects” too literally. Perhaps longtermists wouldn’t actually ignore the present and its problems; rather, their concern for them would be merely instrumental. In other words, longtermists may choose to focus on current problems, but only out of concern for the future.
My response is that attention is zero-sum. We are either solving current pressing problems, or wildly conjecturing what the world will look like in tens, hundreds, and thousands of years. If the focus is on current problems only, then what does the “longtermism” label mean? If, on the other hand, we’re not only focused on the present, then the critique holds to whatever extent we’re guessing about future problems and ignoring current ones.
I agree that attention is a limited resource, but it feels like you’re imagining that split attention leads to something like linear interpolation between focused attention on either end; in fact I think it’s much better than that, and that the two kinds of attention are complementary. For example, we need to wrestle with the problems we face today to get feedback loops good enough to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest-priority.
I actually think that in the longtermist ideal world (where everyone is on board with longtermism) over 90% of attention, perhaps over 99%, would go to things that already look like problems. But at the present margin in the actual world, the longtermist perspective is underappreciated, so it looks particularly valuable.
I’m tempted to just concede this because we’re very close to agreement here.
For example, we need to wrestle with the problems we face today to get feedback loops good enough to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest-priority.
If this turns out to be true (i.e., people end up working on actual problems and not, say, defunding the AMF to worry about “AI controlled police and armies”), then I have much less of a problem with longtermism. People can use whatever method they want to decide which problems they want to work on (I’ll leave the prioritization to 80K :) ).
I actually think that in the longtermist ideal world (where everyone is on board with longtermism) over 90% of attention, perhaps over 99%, would go to things that already look like problems.
Just apply my critique to the x% of attention that’s spent worrying about non-problems. (Admittedly, of course, this world is better than the one where 100% of attention is on non-existent possible future problems.)
I think this might be a case of the-devil-is-in-the-details.
I’m in favour of people scanning the horizon for major problems whose negative impacts are not yet being felt, and letting that have some significant influence on which nearer-term problems they wrestle with. I think a large proportion of what longtermists are working on is problems that are at least partially or potentially within our foresight horizons. It sounds like you may think there is current work happening which is foreseeably of little value; if so, I think it could be productive to debate the details of that.