Picture millions of humans kept in the equivalent for 100% of their adult lives, and suppose that with some work we could free them: would you stick to your longtermist guns?
What is “longtermism” here? Only working on interventions aimed at helping future generations?
If we take longtermism to mean “not discounting future lives just because they’re in the future”, then it seems perfectly consistent to allocate some resources to alleviating suffering of currently-living people.
I don’t think this is very informative or gets to the substance of the issue. Longtermism says that we should give moral consideration to generations in the distant future just as we would to people who are merely physically far away or members of a different species.
In most of the calculations, beliefs, and discounting used in longtermism, the resulting consideration of future generations is plausibly much larger than the consideration of current generations.
(The expression of this attitude is not fanatical or unreasonable: longtermists generally take normal, virtuous actions, in the same way that neartermist EAs do not loot or pillage to donate to AMF.)
Longtermists probably view the resources currently dedicated to these causes as small (limited to current EA money), and while the statement that “things exist now to spend money on” is technically true, longtermists are unlikely to find it convincing in isolation.
In most of the calculations, beliefs, and discounting used in longtermism, the resulting consideration of future generations is plausibly much larger than the consideration of current generations.
Accounting for importance, tractability (and diminishing marginal returns), and neglectedness, I’m skeptical that every single dollar has a higher marginal utility per dollar (MU/$) when allocated to longtermist interventions compared to other causes. More plausibly, longtermist interventions start out with the highest MU/$, but as they are funded and hit diminishing returns, other causes come to have the highest MU/$ and the marginal funding dollar is optimally directed to them.
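To make the diminishing-returns point concrete, here is a minimal sketch in Python. The cause names, log-shaped utility curves, and every number in it are illustrative assumptions, not figures from this discussion or from any actual cost-effectiveness estimate: a greedy allocator hands each marginal dollar to whichever cause currently has the highest MU/$.

```python
# A minimal sketch with made-up numbers (not a real cost-effectiveness model).
# Assumption: each cause has a log-shaped utility curve u(f) = scale * ln(1 + f / halfway),
# so its marginal utility per dollar, u'(f) = scale / (halfway + f), falls as funding f grows.

CAUSES = {
    "longtermist": {"scale": 1000.0, "halfway": 50.0},   # starts with the highest MU/$
    "animal welfare": {"scale": 300.0, "halfway": 20.0},
    "global health": {"scale": 120.0, "halfway": 10.0},
}

def marginal_utility(cause: str, funded: float) -> float:
    """Derivative of scale * ln(1 + funded / halfway) with respect to funding."""
    p = CAUSES[cause]
    return p["scale"] / (p["halfway"] + funded)

def allocate(total_dollars: float, step: float = 1.0) -> dict:
    """Greedily give each marginal 'step' of funding to the cause with the highest MU/$."""
    funded = {c: 0.0 for c in CAUSES}
    spent = 0.0
    while spent < total_dollars:
        best = max(CAUSES, key=lambda c: marginal_utility(c, funded[c]))
        funded[best] += step
        spent += step
    return funded

if __name__ == "__main__":
    # With these (illustrative) parameters, the longtermist cause absorbs the early
    # dollars, but other causes receive marginal dollars once its returns diminish.
    print(allocate(500.0))
```

Under these assumptions the longtermist curve starts out with the highest MU/$ and still does not absorb the whole budget: once its returns diminish, the later marginal dollars flow to the other causes.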
Would you completely defund animal welfare causes and redirect their funding/workers to longtermist interventions?
Would you completely defund animal welfare causes and redirect their funding/workers to longtermist interventions?
I’m not sure that was the claim in the parent comment you responded to.
To see this another way, using an entirely adversarial framing, if that is easier to communicate:
If you take the worldview/mindset/logic in your comment (or the one being formalized and perhaps codified in your posts) into a duel (on the spreadsheets) with certain perspectives in longtermism, there is a danger of “losing” heavily if you rely solely on “numbers” or “marginal utility per dollar”.
With that background in mind, I thought the parent comment was helpful for giving perspective.
…duel (on the spreadsheets) with certain perspectives in longtermism, there is a danger of “losing” heavily if you rely solely on “numbers” or “marginal utility per dollar”.
Can you elaborate? What do you mean by “losing”? Isn’t the case for longtermism that longtermist interventions have the highest combination of importance, tractability, and neglectedness (i.e., the highest MU/$)?