This is a particularly hot topic when it comes to near-term vs. long-term causes: do we think humans today matter morally more than humans in 10,000 years, and how, if at all, should we discount the value of humans over time?
Is there much debate on this? I’d expect most EAs to answer ‘no’ and ‘discount rate=0’.
I’d expect more debate over the tractability of longtermist interventions.
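To make the discount-rate point concrete, here is a minimal Python sketch (my own illustration, assuming pure exponential time discounting and made-up rates) of how much relative weight a person living 10,000 years from now receives under different annual discount rates:

```python
# Toy illustration (hypothetical rates): how a pure exponential time-discount
# rate shrinks the weight given to people living far in the future.
# weight = (1 + r) ** (-years)

for r in (0.0, 0.001, 0.01, 0.03):   # annual discount rates
    weight = (1 + r) ** (-10_000)    # relative weight on a person 10,000 years out
    print(f"r = {r:.3f}: relative weight = {weight:.3e}")

# Even r = 0.1% leaves only ~4.5e-5 of the original weight after 10,000 years,
# which is why 'discount rate = 0' vs. any positive rate is such a consequential answer.
```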
As an EA group facilitator, I’ve been part of many complex discussions about the tradeoffs between prioritizing long-term and short-term causes.
Even though I consider myself a longtermist, I now have a better understanding of, and respect for, the concerns that near-term-focused EAs raise. Allow me to share a few of them.
The world has finite resources, so resources directed to long-term causes cannot also be put towards short-term causes. If the EA community were 100% focused on the very long term, for example, then solvable near-term problems affecting millions or billions of people would likely get less attention and fewer resources, even if they were easy to solve. This matters more as EA grows and has an increasingly outsized impact on where resources are directed. As this post says, marginal reasoning becomes less valid as EA gets larger.
Some long-term EA cause areas may increase the risk of negative outcomes in the near term. For example, people working on AI safety often collaborate with, and even contribute to, capabilities research. AI is already a very disruptive technology and will likely become even more so as its capabilities grow.
People who think “x-risk is all that matters” may be discounting other kinds of risks, such as s-risks (suffering risks) from dystopian futures. If we prioritize x-risks while allowing global catastrophic risks (GCRs) to increase (that is, risks that don’t wipe out humanity but greatly set back civilization), s-risks rise as well, because it is very hard to maintain well-functioning institutions and governments in a world crippled by war, famine, and other problems.
These and other concerns have updated me towards preferring a “balanced portfolio” of resources spread across EA causes from different worldviews, even if my inside view prefers certain causes over others.
If the EA community was 100% focused on the very long term, for example, then it’s likely that solvable problems in the near-term affecting millions or billions of people would get less attention and resources, even if they were easy to solve.
This is directly captured by the ITC framework: as longtermist interventions get funded and hit diminishing returns, neartermist ones will eventually have the highest marginal utility per dollar. (MU/$ is usually a diminishing function of spending, so the top-ranked intervention changes as funding changes.)
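Here is a toy Python sketch of that dynamic, with made-up numbers and a made-up diminishing-returns curve (not anyone’s actual cost-effectiveness estimates): marginal dollars are allocated greedily to whichever bucket currently has the higher MU/$, so the longtermist bucket absorbs early funding but eventually marginal dollars flip to the neartermist one.

```python
# Toy sketch (hypothetical numbers): MU/$ modelled as a diminishing function
# of cumulative spending, so the top-ranked intervention changes as funding grows.

def mu_per_dollar(scale, spent, halving=50e6):
    """Made-up diminishing-returns curve: MU/$ halves every `halving` dollars spent."""
    return scale * 0.5 ** (spent / halving)

spent = {"longtermist": 0.0, "neartermist": 0.0}
scale = {"longtermist": 10.0, "neartermist": 4.0}  # longtermist assumed to start higher

budget, step = 400e6, 10e6
while budget > 0:
    # Greedily fund whichever bucket currently has the higher marginal utility per dollar.
    best = max(spent, key=lambda k: mu_per_dollar(scale[k], spent[k]))
    spent[best] += step
    budget -= step

print(spent)  # once diminishing returns bite, marginal dollars flow to the neartermist bucket
```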
Yes, my bad! This is actually what I meant, i.e. the epistemic uncertainty around longtermist interventions makes it challenging to determine funding allocation. Will amend this, thank you!