Glad to hear you found the post interesting! As for your arguments, I find them interesting but still feel unsure whether I'd land on your conclusions from them. I think for me the key point is maybe something like this:
30 years certainly isn't "long" for EA longtermists, maybe isn't even mid-term for some.
If someone was thinking animal advocacy or MCE was best for the coming decades, but hadn't thought about the world more than 100 years out in any serious way[1], and then later they come across arguments for focusing on making the world more than 100 years out better, and they say "Yeah, I still think animal advocacy and MCE is best for that!", then that'd indeed be suspicious convergence.
Analogously, I think many global health and development people focused on the coming decades, not just the coming few years, and if they then embraced longtermism but still thought global health and development interventions were the top longtermist priority, I'd call that suspicious convergence.
But a key point is that that isn't an extremely strong counterargument anyway. Something can be suspicious convergence and yet still happen to be correct. And there could be cases where you look further and discover that there's a systematic reason why a subset of the near-ish term objectives people already cared about are actually also really key for the long-term future, such that the suspiciousness of the convergence goes away.
Another key point is that I don't have any systematic data on how many people who currently say animal advocacy or MCE stuff should be a top priority for longtermists already supported animal advocacy or MCE stuff beforehand. So maybe there isn't even much suspicious convergence anyway.
But I do think that something like that Mercy For Animals case wouldn't make the convergence non-suspicious, and I do think that that would be a weak argument against the person's conclusion.
[1] We could roughly operationalise this as "at least spending 30 minutes in one go at some point really thinking, reading, or talking about how to make the world more than 100 years from now better". I don't require that people engaged with e.g. EA arguments specifically.
I think the last useful thing in this thread might be your last reply above. But I am going to share my final thoughts anyway.
I think I am still not convinced that the suspicion that animal/MCE advocates had "suddenly embraced longtermism" (in the loose sense, not the EA/philosophical/Toby Ordian sense) is justified, even if the animal advocates I mentioned (like the ones in MFA) haven't thought explicitly about the future beyond 100+ years, because they might have thought that they roughly had, perhaps through a tacit assumption that whatever is achieved in the coming few decades will remain the norm for a very long time.
So, using my MFA example again, I believe the exercise used 30 years not because they (we?) wanted to think only 30 years ahead, but because we roughly thought that might be the most realistic timeline for factory farming to disappear, and maybe also because they couldn't tolerate the thought that they and the animals would have to wait longer than 30 years. Imagine that most of the team members in that exercise had thought the realistic timeline was 100, 200, or 1000 years instead of 30: the exercise could easily have been done for 1000 years, which "magically" (and incorrectly) refutes the suspicion of "suddenly embracing longtermism". But whether it is 30 years or 1000, the argument is the same, because they are thinking the same thing: that the terminal success will stay with the world for a very long time.
Actually, everything said before can be summarised with this simple claim: some (many?) animal advocates tend to tacitly think that they are going to have very long-term or even eternal impacts. For example, if there isn't a movement to eliminate factory farming, it will be there forever.
I think I actually have an alternative accusation toward average farmed animal advocates rather than "suddenly embracing longtermism". I think they suffer from overconfidence about the persistence and level of goodness of their perceived terminal success, which in turn might be due to a lack of imagination, a lack of thinking about counterfactual worlds, a lack of knowledge about technologies/history, or a reluctance to think about the possibility of bad things continuing for much longer.
P.S. An alternative way of thinking about my counter to your counterargument is that, if whether someone's thinking counts as long-term thinking has to fit some already given definition, it is possible for someone who seriously thinks a billion years ahead to accuse someone who had previously thought only a million years ahead of "suddenly embracing longtermism".
But, in terms of most of the picture, I think we are already quite on the same page, probably just not on the same sentence. I probably spent too much time on something trivial.
some (many?) animal advocates tend to tacitly think that they are going to have very long-term or even eternal impacts. For example, if there isn't a movement to eliminate factory farming, it will be there forever.
I think I actually have an alternative accusation toward average farmed animal advocates rather than "suddenly embracing longtermism". I think they suffer from overconfidence about the persistence and level of goodness of their perceived terminal success, which in turn might be due to a lack of imagination, a lack of thinking about counterfactual worlds, a lack of knowledge about technologies/history, or a reluctance to think about the possibility of bad things continuing for much longer.
This is quite an interesting observation/claim. I guess I've observed something kind of similar with many non-EA people interested in reducing nuclear risks:
It seems they often do frame their work around reducing risks of extinction or permanent collapse of civilization
But they usually don't say much about precisely why this would be bad, and in particular how this cuts off all the possible value humanity could experience/create in future
But really the way they seem to differ from EA longtermists who are interested in reducing nuclear risk isn't the above point, but rather how they seem to assume, too uncritically and overconfidently, that any nuclear exchange would cause extinction and that whatever interventions they're advocating for would substantially reduce the risk
So this all seems to tie into a more abstract, broad question about the extent to which the EA community's distinctiveness comes from its moral views (or its strong commitment to actually acting on them) vs its epistemic norms, empirical views, etc.
Though the two factors obviously interrelate in many ways. For example, if one cares about the whole long-term future and is genuinely very committed to actually making a difference to that (rather than just doing things that feel virtuous in relation to that goal), that could create strong incentives to actually form accurate beliefs, not jump to conclusions, recognise reasons why some problem might not be an extremely huge deal (since those reasons could push in favour of working on another problem instead), etc.