A lot of the prioritization work, even of the form, “Let’s just estimate a lot of things to get expected values.”
I would like to see more of this, and I would also like to see people be less uniformly critical of this sort of work. I’ve written a few things like this, and I inevitably get a few comments along the lines of, “This estimate isn’t actually accurate, you can’t know the true expected value, this research is a waste of time.” IME I get much more strongly negative comments when I write anything quantitative than when I don’t. But I might just be noticing that type of criticism more than other types.
Much better EA funding infrastructure, in part for long-term funding.
The rate of institutional value drift is something like 0.5% per year. Halving this would be extremely beneficial for anyone who wants to invest their money for future generations. It seems likely that if we put more effort into designing stable institutions, we could create EA investment funds that last much longer.
The rate of individual value drift is even higher, something around 5% per year. That’s really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?
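To make the stakes concrete, here’s a back-of-the-envelope sketch (my own toy calculation, which assumes the drift rate can be read as a constant annual hazard):

```python
# Toy calculation: if values drift with constant annual probability v,
# the chance an institution's values survive t years is (1 - v)**t.

def survival_prob(v: float, years: int) -> float:
    """Probability of no value drift after `years`, at annual drift rate `v`."""
    return (1 - v) ** years

for v in (0.005, 0.0025):  # ~0.5%/year vs. the halved rate
    print(f"drift {v:.2%}/yr -> P(values intact after 100y) ~ {survival_prob(v, 100):.2f}")
# drift 0.50%/yr -> P(values intact after 100y) ~ 0.61
# drift 0.25%/yr -> P(values intact after 100y) ~ 0.78
```

Under these (assumed) numbers, halving the drift rate takes a century-long fund from roughly a 60% chance of staying on mission to nearly 80%.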
Some other neglected problems (with some shameless references to my own writings):
I like GPI’s research agenda. Right now there are only about half a dozen people working on these problems.
What is the correct “philosophy of priors”? The choice of prior distribution heavily affects how we should behave in areas of high uncertainty. For example, see Will MacAskill’s post and Toby Ord’s reply. (edit: see also this relevant post)
With a simple model, I calculated that improving our estimate of the discount rate could matter more than any particular cause. The rationale is that we should spend our resources at some optimal rate, which is largely determined by the philanthropic discount rate. Moving our spending schedule slightly closer to the optimal rate substantially increases expected utility. This is just based on a simple model, but I’d like to see more work on this. (A toy numeric sketch of this kind of model appears after this list.)
In the conclusion of the same essay, I gave a list of relevant ideas for potential top causes with my rough guesses on their importance/neglectedness/tractability. The ideas not mentioned so far are: improving the ability of individuals to delegate their income to value-stable institutions; and making expropriation and value drift less threatening by spreading altruistic funds more evenly across actors and countries.
IMO there are some relatively straightforward ways that EAs could invest better, which I wrote about here. Improving EAs’ investments could be pretty valuable, especially for “give later”-leaning EAs.
Reducing the long-term probability of extinction, rather than just the probability over the next few decades. (I’m currently writing something about this.)
If you accept that improving the long-term value of the future is more important than reducing x-risk, is there anything you should do now, or should you mainly invest to give later? Does movement building count as investing? What about cause prioritization research? When is it better to work on movement building/cause prioritization rather than simply investing your money in financial assets?
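To give the flavor of the discount-rate point above, here is a minimal toy sketch (not the actual model from my essay; it assumes logarithmic utility, and the return and discount numbers are arbitrary). In this toy setup the optimal constant spending rate equals the discount rate, and deviating from it costs a lot of discounted utility:

```python
import numpy as np

# Toy model (a sketch, not the essay's actual model): capital earns
# return r, we spend a constant fraction s per year, utility of spending
# is logarithmic, and utility is discounted at rate delta.
# U(s) = integral of exp(-delta*t) * ln(s * exp((r - s) * t)) dt,
# with initial capital normalized to 1.

def discounted_utility(s, r=0.05, delta=0.02, horizon=2_000, dt=0.1):
    t = np.arange(0.0, horizon, dt)
    log_spending = np.log(s) + (r - s) * t  # ln of the spending flow at time t
    return float(np.sum(np.exp(-delta * t) * log_spending) * dt)

for s in (0.005, 0.01, 0.02, 0.04, 0.08):
    print(f"spending rate {s:.3f}: discounted utility {discounted_utility(s):8.1f}")
# The curve peaks at s = delta (0.02 here); even modest deviations from
# the optimum cost substantial utility, which is why a better estimate
# of the discount rate could matter so much.
```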
IME I get much more strongly negative comments when I write anything quantitative than when I don’t. But I might just be noticing that type of criticism more than other types.
I haven’t seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kinds of estimates at all. At the extreme end are people who don’t even make clear statements; they just speak in vague metaphors or business jargon that is easy to defend but doesn’t actually convey any information. Needless to say, I think this is an anti-pattern. I’d be curious if anyone reading this would argue otherwise.
The rate of individual value drift is even higher, something around 5% per year. That’s really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?
It seems to me like some modeling here would be highly useful, though it can get kind of awkward. I imagine many decent attempts would include numbers like, “total expected benefit of one member”. Our culture often finds some of these calculations too “cold and calculating.” It could be worth it for someone to do a decent job at some of this, and just publicly write up the main takeaways.
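As a purely hypothetical illustration of the kind of number involved (every figure below is invented for the example, not an estimate of anything):

```python
# Hypothetical "total expected benefit of one member" calculation.
# All numbers are made up purely for illustration.
annual_value = 10_000   # assumed value generated by one engaged member per year
drift = 0.05            # assumed annual rate of dropping out / value drift
discount = 0.03         # assumed annual discount rate

# With exponential drift and discounting, the expected discounted benefit is
# integral of annual_value * exp(-(drift + discount) * t) dt
#   = annual_value / (drift + discount).
expected_benefit = annual_value / (drift + discount)
print(f"expected discounted benefit per member: {expected_benefit:,.0f}")  # 125,000
```

Even a crude calculation like this makes the retention-vs-recruitment tradeoff explicit, which is exactly the sort of thing that can read as “cold and calculating.”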
I find the ideas you presented quite interesting and reasonable, I’d love to see more work along those lines.
I’d be curious if anyone reading this would argue otherwise.
I think it would depend a lot on how we operationalise the stance you’re arguing in favour of.
Overall, at the margin, I’m in favour of:
less use of vague-yet-defensible language
EAs/people in general making and using more explicit, quantitative estimates (including probability estimates)
(I’m in favour of these things both in general and when it comes to cause prioritisation work.)
But I’m somewhat tentative/moderate in those views. For the sake of conversation, I’ll skip stating the arguments in favour of those views, and just focus on the arguments against (or the arguments for tentativeness/moderation).
Essentially, as I outlined in this post (which I know you already read and left useful comments on), I think making, using, and making public quantitative estimates might sometimes:
Cost more time and effort than alternative approaches (such as more qualitative, “all-things-considered” assessments/discussions)
Exclude some of the estimators’ knowledge (which could’ve been leveraged by alternative approaches)
Cause overconfidence and/or cause underestimations of the value of information
Succumb to the optimizer’s curse (see the simulation sketch after this list)
Cause anchoring
Cause reputational issues
(These downsides won’t always occur, can sometimes occur more strongly if we use approaches other than quantitative estimates, and can be outweighed by the benefits of quantitative estimates. But here I’m just focusing on “arguments against”.)
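As a quick illustration of the optimizer’s-curse point, here is a small simulation of my own (not from the linked post): estimate many options with noise, pick the one that looks best, and compare its estimate to its true value.

```python
import random

random.seed(0)
n_trials, n_options, noise_sd = 10_000, 20, 1.0

total_gap = 0.0
for _ in range(n_trials):
    true_values = [random.gauss(0.0, 1.0) for _ in range(n_options)]
    estimates = [v + random.gauss(0.0, noise_sd) for v in true_values]
    best = max(range(n_options), key=lambda i: estimates[i])  # looks best
    total_gap += estimates[best] - true_values[best]

# The gap is positive: the option chosen for looking best is, on average,
# overestimated -- the optimizer's curse.
print(f"avg (estimate - true value) of the chosen option: {total_gap / n_trials:.2f}")
```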
As a result:
I don’t think we should always aim for or require quantitative estimates (including in cause prioritisation work)
I think it may often be wise to combine use of quantitative estimates, formal models, etc. with more intuitive / all-things-considered / “black-box” approaches (see also)
I definitely think some statements/work from EAs and rationalists have used quantitative estimates in an overconfident way (sometimes wildly so), and/or have been treated by others as more certain than they are
It’s plausible to me that this overconfidence problem has not merely co-occurred or correlated with use of quantitative estimates, but that it tends to be exacerbated by that
But I’m not at all certain of that. Using quantitative estimates can sometimes help us see our uncertainty, critique people’s stances, have reality clearly prove us wrong (well, poorly calibrated), etc.
Relatedly, I think people using quantitative estimates should be very careful to remember how uncertain they are and communicate this clearly
But I’d say the same for most qualitative work in domains like longtermism
It’s plausible to me that the anchoring and/or reputational issues of making one’s quantitative estimates public outweigh the benefits of doing so (relative to just making more qualitative conclusions and considerations public)
But I’m not at all certain of that (as demonstrated by me making this database)
And I think this’ll depend a lot on how well thought-out one’s estimates are, how well one can communicate uncertainty, what one’s target audiences are, etc.
And it could still be worth making the estimates and not communicating them, or communicating them less publicly
I don’t think this position strongly contrasts with your or Michael’s positions. And indeed I’m a fan of what I’ve seen of both your work, and overall I favour more work like that. But these do seem like nuances/caveats worth noting.
Nice post. I think I agree with all of that.

I’m not advocating for “poorly done quantitative estimates.” I think anyone reasonable would admit that it’s possible to bungle them.
I’m definitely not happy with a local optimum of “not having estimates”. It’s possible that “having a few estimates” can be worse, but I imagine we’ll want to get to the point of “having lots of estimates, and being mature enough to handle them” at some point, so that’s the direction to aim for.
I think the “local vs global optima” framing is an interesting way of looking at it.
That reminds me of some of my thinking when I was trying to work out whether it’d be net positive to make that database of existential risk estimates (vs it being net negative due to anchoring, reputational issues for EA/longtermists, etc.). In particular, a big part of my reasoning was something like:
It’s plausible that it’s worse for this database to exist than for there to be no public existential risk estimates. But what really matters is whether it’s better that this database exist than that there be a small handful of existential risk estimates, scattered in various different places, and with people often referring to only one set in a given instance (e.g., the 2008 FHI survey), sometimes as if it’s the ‘final word’ on the matter.
That situation is probably even worse from an anchoring and reputational perspective than there being a database, because seeing a larger set of estimates side by side could help people see how much disagreement there is and thus hold a more appropriate level of uncertainty and humility.
With your comment in mind, I’d now add:
But all of that is just about how good various different present-day situations would be. We should also consider what position we ultimately want to reach.
It seems plausible that we could end up with a larger set of more trustworthy and more independently-made existential risk estimates. And it seems likely that this would be better than the situation we’re in now.
Furthermore, it seems plausible that making this database moves us a step towards that destination. This could be a reason to make the database, even if doing so was slightly counterproductive in the short term.
I haven’t seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kinds of estimates at all...
Reminds me of the thing where corporations don’t want to implement internal prediction markets because implementing a market isn’t in the self-interest of any individual decision-maker.

Yeah, I think there are similar incentives at play in both cases.
I imagine many decent attempts would include numbers like, “total expected benefit of one member”. Our culture often finds some of these calculations too “cold and calculating.”
I think this is a good point. A three-factor model of community building comes to mind as a prior post that had to tackle and communicate about this sort of tricky thing, and that did a good job of that, in my opinion. That post might be useful reading for other people who have to tackle and communicate about this sort of tricky issue in future. (E.g., I quoted it in a recent post of mine.)
The most relevant parts of that post are the section on “Elitism vs. egalitarianism”, and the following paragraph:
[Variation in the factors this post focuses on] often rests on things outside of people’s control. Luck, life circumstance, and existing skills may make a big difference to how much someone can offer, so that even people who care very much can end up having very different impacts. This is uncomfortable, because it pushes against egalitarian norms that we value. [...] We also do not think that these ideas should be used to devalue or dismiss certain people, or that they should be used to idolize others. The reason we are considering this question is to help us understand how we should prioritize our resources in carrying out our programs, not to judge people.

Thanks!
It seems to me like some modeling here would be highly useful
The basic model is really easy. The total number of community members at time $t$ is $e^{(r-v)t}$, where $r$ is the movement growth rate and $v$ is the value drift rate. So if the value of the EA community is proportional to the number of members, then increasing $r$ by some number of percentage points is exactly as good as decreasing $v$ by the same amount.
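In code, the equivalence is immediate (this is a direct transcription of the formula above; the particular numbers are just an example):

```python
import math

def members(t: float, r: float, v: float) -> float:
    """Community size at time t with growth rate r and value drift rate v."""
    return math.exp((r - v) * t)

# One percentage point more growth == one percentage point less drift:
print(members(20, r=0.11, v=0.05))  # ~3.32
print(members(20, r=0.10, v=0.04))  # ~3.32 (identical)
```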
It’s less obvious how to model the tractability of changing $r$ and $v$.

I liked this comment.
If you accept that improving the long-term value of the future is more important than reducing x-risk
Do you mean “If you accept that improving the long-term value of the future is more important than reducing extinction risk” (as distinct from existential risk more broadly, which already includes other ways of improving the value of the future)?
Or “If you accept that improving the long-term value of the future is more important than reducing the risk of existential catastrophe in the relatively near future?”

Or something else (e.g., about smaller trajectory changes)?
I meant to distinguish between long-term efforts and reducing x-risk in the relatively near future (the second case on your list), sorry that was unclear.
This is a really good comment.