Some (many?) animal advocates tend to tacitly assume that their work will have very long-term or even eternal impacts. For example, that if there isn't a movement to eliminate factory farming, it will persist forever.
I think I actually have a different accusation to level at the average farmed animal advocate than "suddenly embracing longtermism". I think they suffer from overconfidence about the persistence and goodness of their perceived terminal success, which in turn might stem from a lack of imagination, a lack of thinking about counterfactual worlds, a lack of knowledge about technologies/history, or a reluctance to contemplate the possibility of bad things persisting for much longer.
This is quite an interesting observation/claim. I guess I've observed something kind of similar with many non-EA people interested in reducing nuclear risks:
It seems they often do frame their work around reducing risks of extinction or permanent collapse of civilization
But they usually don't say much about precisely why this would be bad, and in particular about how it would cut off all the possible value humanity could experience/create in the future
But really the way they seem to differ from EA longtermists who are interested in reducing nuclear risk isn't the above point, but rather how uncritically and overconfidently they assume both that any nuclear exchange would cause extinction and that whatever interventions they're advocating for would substantially reduce the risk
So this all seems to tie into a more abstract, broad question about the extent to which the EA community’s distinctiveness comes from its moral views (or its strong commitment to actually acting on them) vs its epistemic norms, empirical views, etc.
Though the two factors obviously interrelate in many ways. For example, if one cares about the whole long-term future and is genuinely very committed to actually making a difference to that (rather than just doing things that feel virtuous in relation to that goal), that could create strong incentives to actually form accurate beliefs, not jump to conclusions, recognise reasons why some problem might not be an extremely huge deal (since those reasons could push in favour of working on another problem instead), etc.