I would also be interested in whether they take into account recent discussions/criticisms of model choices in longtermist math, which strike me as especially important for the kind of advising 80,000 Hours does (tl;dr: I take one crux of that article to be that the longtermist benefits of individual action are often overstated, because the great benefits longtermism advertises require both reducing risk and keeping overall risk down over the long term, which plausibly exceeds the scope of a single career or life).
We did discuss this internally in Slack (prompted by David’s podcast https://critiquesofea.podbean.com/e/astronomical-value-existential-risk-and-billionaires-with-david-thorstad/). My take was that the arguments don’t mean that reducing existential risk isn’t very valuable, even though they do imply it’s likely not of ‘astronomical’ value. So, for example, you can’t ignore all other considerations and treat “whether this will reduce existential risk” as a full substitute for whether something is a top priority. I agree with that.
We do generally agree that many questions in global priorities research remain open — that’s why we recommend some of our readers pursue careers in this area. We’re open to the possibility that new developments in this field could substantially change our views.
I think there would be considerable value in having the biggest career-advising organization (80k) be a non-partisan EA advising organization, whereas I currently take them to be strongly favoring longtermism in their advice. While I think this explicit stance is a mistake, getting a better grasp on its motivation would help me understand why it was taken.
We’re not trying to be ‘partisan’, for what it’s worth. There might be a temptation to see longtermism and neartermism as different camps, but what we’re trying to do is figure out, all things considered, what we think is most pressing/promising and communicate that to readers. We tend to think that the propensity to affect the long-run future is a key way in which an issue can be extremely pressing (which we explain in our longtermism article).
Hey there, thank you both for the helpful comments.
I agree the short-termist/longtermist framing shouldn’t be understood as too deep a divide or too reductive a category, but I think it serves a decent purpose in marking a distinction between different foci in EA (e.g. Global Health/Factory Farming vs. AI Risk/Biosecurity).
The comment above really helped me in seeing how prioritization decisions are made. Thank you for that, Ardenlk!
I’m a bit less bullish than Vasco on it being good that 80k does their own prioritization work. I don’t think it is bad per se, but I’m not sure what is gained by 80k’s research on the topic vis-à-vis other EA people trying to figure out prioritization. I do worry that what is lost are advocates/recommendations for causes that are not currently well represented in the opinions of the research team, but that are well represented among other EAs more broadly. This makes it harder for people like me to funnel folks toward EA-principles-based career advising, as I’d be worried the advice they receive would not be representative of the considerations of EA folks, broadly construed. Again, I realize I may be overly worried here, and I’d be happy to be corrected!
I read the Thorstad critique as somewhat stronger than the summary you give: certainly, just invoking x-risk should not by default justify assuming astronomical value. But my sense from the two examples (one from Bostrom, one on the cost-effectiveness of biorisk interventions) was that more plausible modeling assumptions seriously undercut at least some current cost-effectiveness models in that space, particularly for individual interventions (as opposed to, e.g., systemic interventions that plausibly reduce risk long-term). I did not take it to imply that risk reduction is not a worthwhile cause, but rather that current models arrive at its dominance as a cause based on implausible assumptions (e.g. about background risk).
I think my perception of 80k as “partisan” stems from posts such as this one, as well as the deprioritization of global health/animal welfare reflected on the website. If I read the post right, the four positive examples all concern longtermist causes, including one person who shifted from global health to longtermist causes after interacting with 80k. I don’t mean to suggest that any of those shifts were mistaken; I merely notice that the only appearance of global health or animal welfare is in that one example of someone who seems to have been moved away from those causes toward a longtermist one.
I may be reading too much into this. If you have any data (or even guesses) on what percentage of the people you advise end up being funneled toward global health and animal welfare causes, and what percentage toward risk reduction broadly construed, that would be really helpful.
The comment above hopefully helps address this.