(Speaking for myself, not any of my employers, as per usual)
Here are my personal, tentative takeaways after reading and thinking about this topic off and on for several months:
The case for punting and the case for doing/supporting “direct work” primarily for its “punting-like” benefits (e.g., value of information, field-building) both seem pretty strong.
The case for doing direct work primarily for its more “direct” benefits seems less strong.
If memory serves, I think:
I hadn’t thought about these matters much at all last year
Then, when I heard things like Trammell’s 80k episode, I began to feel that the arguments for punting were stronger than the arguments for doing/supporting direct work
Then, in the course of working on this post, I became more confident about the arguments for punting, and started to think that the key value of direct work might be its punting-like benefits (and that decisions about direct work—e.g., which org to donate to—should perhaps be based primarily on those types of benefits)
I think “EA in general” had undervalued the arguments for punting until 2020. But I think that a major shift has occurred in 2020 (see e.g. the many recent posts under the Patient Altruism tag).
Our discourse may now roughly appropriately balance the case for punting and the case for “direct work now”.
It’s hard for me to comment on whether our actions strike the appropriate balance. [I edited this set of points in response to MichaelDickens’ comment below.]
I think EAs may still pay too little attention to the idea that direct work might be valuable primarily for its punting-like benefits, and that that may be the key factor to consider when making decisions about direct work.
I’m quite unsure about how we should allocate resources between punting vs direct work selected for its punting-like benefits.
Next year, I think I’ll give 10% of my income to “direct work” orgs/projects/people, which I’ll select primarily based on their potential punting-like benefits (e.g., mentoring early-career researchers). And I’ll invest as much as I easily can beyond that 10% (which I expect to be >10%) for giving later, once I’ve accrued interest on it and I know more.
A good counterargument to me doing that is that I may undergo value drift. To partially address that, I might use a donor advised fund.
It’s also very possible I should invest the 10% as well. A non-negligible factor in me planning to support direct work with 10% of my income is simply that I want to (rather than that I’m confident it’s morally best).
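The invest-to-give-later tradeoff above can be sketched with a toy calculation. This is my own illustrative model, not anything from the thread: it assumes a single annual investment return r and a single annual hazard h (the chance the money effectively stops serving its original purpose through value drift, expropriation, etc.), with hypothetical round numbers.

```python
# Toy model of investing a donation for t years before giving it.
# Each year the holdings grow by r, but with hazard h the money is
# effectively lost to the original goal (value drift, expropriation, ...).
def expected_multiplier(r: float, h: float, t: int) -> float:
    """Expected value per dollar after t years, relative to giving now."""
    return ((1 + r) * (1 - h)) ** t

# With a low (institution-like) annual hazard, patience compounds:
expected_multiplier(0.05, 0.005, 20)   # roughly 2.4x

# With a high (individual-like) annual hazard, the hazard roughly
# cancels the return, so waiting gains little or loses value:
expected_multiplier(0.05, 0.05, 20)    # roughly 0.95x
```

The qualitative point is that giving later only wins when (1 + r)(1 - h) > 1, which is why the individual vs institutional drift rates discussed below matter so much.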
I think “EA in general” had undervalued the arguments for punting until 2020. But I think that a major shift has occurred in 2020 (see e.g. the many recent posts under the Patient Altruism tag), and we might now be at approximately the right point.
If punting is indeed the right move, then this only seems true with regard to the discourse, not with regard to people’s actual behavior. For example, Open Phil spends somewhere around 3% of its budget per year, which is too high on pure “patient longtermist” considerations—Phil Trammell’s paper suggested an optimal spend rate of ~0.5% in general, but possibly lower than that if you believe other philanthropists are spending too quickly. (Global poverty donors in particular should be giving 0% per year. This claim seems pretty robustly true.)
Edited to add: I think a rate above 0.5% can be justified based on issues with value drift/expropriation, see https://forum.effectivealtruism.org/posts/3QhcSxHTz2F7xxXdY/estimating-the-philanthropic-discount-rate. AFAIK, nobody has really put work into determining the optimal spending rate, so we don’t know what it is even if we accept the arguments for urgency. My best guess based on my limited research is that the optimal urgent spending rate is something like 1.5% for institutions and 6% for individuals (based on a 0.5% annual probability of existential catastrophe, a 0.5% expropriation rate, a 0.5% institutional value drift rate, and a 5% individual value drift rate).
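For what it’s worth, the figures above are consistent with a simple reading in which the urgent spending rate is just the sum of the component annual hazard rates. That’s my interpretation of how the numbers line up, not necessarily the actual model:

```python
# Checking that the quoted "urgent" spending rates match the sum of the
# stated annual hazard rates (a hypothetical reading, not a derived result).
catastrophe = 0.005          # 0.5% annual probability of existential catastrophe
expropriation = 0.005        # 0.5% annual expropriation rate
institutional_drift = 0.005  # 0.5% annual institutional value drift rate
individual_drift = 0.05      # 5% annual individual value drift rate

institutional_rate = catastrophe + expropriation + institutional_drift
individual_rate = catastrophe + expropriation + individual_drift

print(f"{institutional_rate:.1%}")  # prints 1.5%
print(f"{individual_rate:.1%}")     # prints 6.0%
```

On that reading, the intuition would be that money should be spent at roughly the rate at which it is expected to stop serving its purpose, and the two quoted rates differ only in the value drift term.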
Ah, good point that we should distinguish the discourse from the behaviours, and that what I said is clearer for the discourse than for the behaviours. I actually intended those sentences to just be about the discourse, but I didn’t make that clear. (I’ve now edited those sentences.)
Also, whether people’s discourse is at an appropriate point is probably less decision-relevant than whether their actions are, because:
it might be more worthwhile to try to push their actions towards the appropriate balance than to push their discourse towards the appropriate balance
we might want to oversteer one way or the other to compensate for what other people are doing (and this is somewhat less true regarding what people are saying)
Unfortunately, I find it very hard to say whether EAs’ actions are, in aggregate, overemphasising “direct work now”, overemphasising punting, or striking roughly the right balance. (Alternative terms would be “too urgent” vs “too patient” vs roughly right.) This is because I don’t have a strong sense of what balance EAs are currently striking or of what balance they should be striking. (Though I’ve found your work helpful on the latter point.)
Also, I realise now that I’m basing my assessment of EA’s discourse primarily on what I see on the forum and what I hear from the EAs I speak to, who are mostly highly engaged. This probably gives me a misleading picture, as ideas probably diffuse faster to these groups than to EAs in general.