It is, of course, an inherent feature of prioritising between causes in order to do the most good that some groups will be effectively ignored.
Luckily, in this case, if done sensibly, I would expect a strong correlation between short-term welfare and long-run welfare. Managing high uncertainty should involve some amount of ensuring good feedback loops and iterating: taking action that changes things for the better (for the long run, but in a way that affects the world now), then learning and improving. Building the EA community, developing clean meat, improving policymaking, etc.
(Unfortunately I am not sure to what extent this is a key part of the EA longtermist paradigm at present.)
Yeah that is a good way of putting it. Thank you.