Hey there, thank you both for the helpful comments.
I agree the shorttermist/longtermist framing shouldn't be understood as too deep a divide or too reductive a category, but I think it serves a decent purpose in marking a clear distinction between different foci in EA (e.g. Global Health/Factory Farming vs. AI Risk/Biosecurity, etc.).
The comment above really helped me in seeing how prioritization decisions are made. Thank you for that, Ardenlk!
I'm a bit less bullish than Vasco on it being good that 80k does its own prioritization work. I don't think it is bad per se, but I am not sure what is gained by 80k's research on the topic vis-à-vis other EA people trying to figure out prioritization. I do worry that what is lost are advocates/recommendations for causes that are not currently well represented in the opinion of the research team, but that are well represented among other EAs more broadly. This makes it harder for people like me to funnel folks toward EA-principles-based career advising, as I'd be worried the advice they receive would not be representative of the considerations of EA folks, broadly construed. Again, I realize I may be overly worried here, and I'd be happy to be corrected!
I read the Thorstadt critique as somewhat stronger than the summary you give: certainly, merely invoking X-risk should not by default justify assuming astronomical value. But my sense from the two examples (one from Bostrom, one on the cost-effectiveness of biorisk interventions) was that more plausible modeling assumptions seriously undercut at least some current cost-effectiveness models in that space, particularly for individual interventions (as opposed to, e.g., systemic interventions that plausibly reduce risk long-term). I did not take it to imply that risk reduction is not a worthwhile cause, but that current models seem to arrive at its dominance as a cause based on implausible assumptions (e.g. about background risk).
I think my perception of 80k as "partisan" stems from posts such as these, as well as the deprioritization of global health/animal welfare reflected on the website. If I read the post right, the four positive examples are all on longtermist causes, including one person who shifted from global health to longtermist causes after interacting with 80k. I don't mean to suggest that in any of these cases that should not have been done; I merely notice that the only appearance of global health or animal welfare is in that one example of someone who seems to have been moved away from those causes toward a longtermist one.
I may be reading too much into this. If you have any data (or even a guess) on what percentage of the people you advise end up funneled toward global health and animal welfare causes, and what percentage toward risk reduction broadly construed, that would be really helpful.