The relevant section of this post does appear to be discussing financial investments, or at least primarily focusing on that. But that wasn't Trammell's sole focus. As he states in his 80k interview:
Philip Trammell: [...] in this write-up, I do try to make it clear that by investment, I really am explicitly including things like fundraising and at least certain kinds of movement building which have the same effect of turning resources now, not into good done now, but into more resources next year with which good will be done. I would be just a little careful to note that this has to be the sort of movement building advocacy work that really does look like fundraising in the sense that you're not just putting more resources toward the cause next year, but toward the whole mindset of either giving to the cause or investing to give more in two years' time to the cause. You might spend all your money and get all these recruits who are passionate about the cause that you're trying to fund, but then they just do it all next year.
Robert Wiblin: The fools!
Philip Trammell: Right. And I don't know exactly how high fidelity in this respect movement building tends to be or EA movement building in particular has been. So that's one caveat. I guess another one is that when you're actually investing, you're generally creating new resources. You're actually building the factories or whatever. Whereas when you're just doing fundraising, you're movement building, you're just diverting resources from where they otherwise would have gone.
Robert Wiblin: You're redistributing from some efforts to others.
Philip Trammell: Yeah. And so you have to think that what people otherwise would have done with the resources in question is of negligible value compared to what they'll do after the funds had been put in your pot. And you might think that if you just look at what people are spending their money on, the world as a whole… I mean you might not, but you might. And if you do, it might seem like this is a safe assumption to make, but the sorts of people you're most likely to recruit are the ones who probably were most inclined to do the sort of thing that you wanted anyway on their own. My intuition is that it's easy to overestimate the real returns to advocacy and movement building in this respect. But I haven't actually looked through any detailed numbers on this. It's just a caveat I would raise.
---
I'm currently working on two drafts relevant to these topics, with the working titles "A typology of strategies for influencing the future" and "Crucial questions about optimal timing of work and donations". I'll quote below my current attempt from one of those drafts to make a distinction between "present-influence" actions (this term may be replaced) and "punting to the future" actions. (I plan to adjust this attempt soon, or at least to add a causal diagram to make things clearer.)
---
MacAskill has discussed whether we're living at the "most influential time in history", for which he proposed the following definition:
a time t_i is more influential (from a longtermist perspective) than a time t_j iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_i rather than to a longtermist altruist living at t_j.
He writes that the most obvious implication of this is:
regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call "buck-passing" strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.
Following Tomasik, I'll refer to "buck-passing" strategies as "punting to the future".
There were many comments on MacAskill's post about the difficulties of distinguishing "buck-passing" strategies from other strategies. It can also seem hard to distinguish this from the "narrow vs broad" dimension and an "object-level vs meta-level" dimension [these are two other distinctions I discuss in this draft]. But I think we can resolve these issues by drawing on this comment from Jan Brauner:
Punting strategies, in contrast, affect future generations [primarily] via their effect on the people alive in the most influential centuries.
Here are my proposed terms and definitions: There's a continuum from "present-influence" actions to "punting to the future" actions. Present-influence actions are intended to "quite soon" result in "direct impacts"[...]. Relatively clear examples include:
Doing AI safety research yourself to directly reduce existential risk.
Providing productivity coaching to AI safety researchers.
Meanwhile, "punting to the future" actions are intended to result in "direct impacts" primarily via actions taken "a long time" from now, which the punting to the future actions somehow supported. [...]
One relatively clear example of a punting to the future action is investing money so that, decades from now, you'll be able to donate to support AI safety research or movement-building. I also think it makes sense to imagine punting to your own future self, such as by doing a PhD so you can have more impact in "direct work" later, rather than doing "direct work" now.
However, the division isnât sharp, because:
all actions would have their influence at least slightly in the future
many actions will have multiple pathways to impact, some taking little time and others stretching over longer times
For example, AI safety movement-building and existential risk strategy research could be intended to result in "direct impacts" (after several steps) both decades from now and within years, although probably not within weeks or months. Such actions could be seen as landing somewhere in the middle of the "present-influence to punting" dimension, and/or as having a "present-influence" component in addition to a "punting to the future" component. Indeed, even some people doing AI safety research themselves may be doing so partly or entirely for movement-building reasons, such as to attract funding and talent by showing that progress on these questions is possible and concrete work is being done (see Ord).
---
If anyone would like to see (and perhaps provide feedback on) either or both of those drafts I'm working on, let me know.