That’s a great question. Longtermists look to impact the far future (even thousands or millions of years from now) rather than the nearish future because they think the future could be very long, so there’s a lot more value at stake looking far out.
They also think there are tangible, near-term decisions (e.g. about AI, space governance, etc.) that could lock in values or institutions and shape civilization’s long-run trajectory in predictable ways. You can read more on this in essay 4, “Persistent Path-Dependence”.
Ultimately, it just isn’t clear how things like saving or improving lives now will influence the far-future trajectory, so these interventions aren’t typically prioritized by longtermists.
Okay, so a simple gloss might be something like “better futures work is GHW for longtermists”?
In other words, I take it there’s an assumption that people doing standard EA GHW work are not acting in accordance with longtermist principles. But fwiw, I get the sense that plenty of people who work on GHW are sympathetic to longtermism, and perhaps think—rightly or wrongly—that doing things like facilitating the development of meat alternatives will, in expectation, do more to promote the flourishing of sentient creatures far into the future than, say, working on space governance.
I think GHW people generally don’t think you can predictably influence the far future because effects “wash out” over time, or they think trying to do so is fanatical (you’re betting on an extremely small chance of a very large payoff).
If you look at, for example, GiveWell’s cost-effectiveness analyses, effects in the far future don’t feature. If they thought most of the value of saving a life lay in the far future, you’d expect them to incorporate that. The same goes for analyses by Animal Charity Evaluators.
Longtermists think they can find interventions that avoid the washing-out objection. Essay 4 of the series goes into this; see also the shorter summary.