Longtermism is probably not worth pursuing if the far future contains much more suffering than happiness
Longtermism isn’t synonymous with making sure more sentient beings exist in the far future. That’s one subset, which is popular in EA, but an important alternative is that you could work to reduce the suffering of beings in the far future.
Before caring about longtermism, we should probably care more about making the world a place where humans aren't causing more suffering than happiness (for example, by ending factory farming)
No, I’d argue longtermism merits significant attention right now. Just that factory farming also merits significant attention.
I agree with you that protecting the future (e.g. mitigating existential risks) needs to be accompanied by trying to ensure that the future is net positive rather than negative. But one argument I find pretty persuasive is this: even if the present were hugely net negative, our power as a species is so great and still increasing (especially if you include AI) that it's quite plausible we could turn that balance positive in the future. And since the future is such a big place, that could outweigh all present and near-term negativity. Obviously there are big question marks here, but the increasing-power trend at least is convincing, and relevant.
Oh yeah, I just remembered that moral circle expansion is part of longtermism; that's true.
It's just that I mostly hear about longtermism in the context of existential risk reduction; my point above was more about that topic.