I think the moral assumptions dominate tractability/crowdedness considerations in practice, if you want to maximize the total QALYs of the universe. The current price of a life saved by malaria nets is $6,000. If we stand to have 10^40 lives, reducing x-risk is better on the margin as long as 0.00000000000000000000000000001% chance of doom is prevented by the next billion dollars, and this will basically always be true. (edit: on anything resembling our current earth, it would stop being true after we’re colonizing galaxies or something)
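To spell out the arithmetic behind that claim (an illustrative calculation using the $6,000-per-life and 10^40-lives figures above): a billion dollars of bednets saves about

$$\frac{\$10^{9}}{\$6{,}000/\text{life}} \approx 1.7\times 10^{5} \text{ lives},$$

so marginal x-risk reduction wins in expectation whenever the probability of doom averted per billion dollars exceeds

$$\Delta P > \frac{1.7\times 10^{5}}{10^{40}} \approx 2\times 10^{-35} \;\;(\text{about } 10^{-33}\%),$$

and the threshold quoted above clears that bar by several orders of magnitude.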
Under a moral parliament with fixed weights you also don’t get changes in allocation based on the cost-effectiveness of longtermist interventions, unless some portion of your moral parliament values preventing x-risk roughly as much as saving the ~8 billion people alive today. But if the stakes are only 8 billion lives, that portion is just not axiologically longtermist. For a longtermist portion of your moral parliament to stop allocating resources to making the long-term future go well as marginal cost-effectiveness declines, it has to think what’s at stake is only 1-1000 times as important as saving 8 billion lives, i.e. within a few orders of magnitude of the near-term benchmark.
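To make that threshold explicit, here is a rough sketch (my own formalization, not anything from the moral-parliament literature; it assumes that x-risk work valued only at the ~8 billion present lives is roughly cost-competitive with the best near-term interventions today). Let $V$ be how many times more important the faction thinks the long-term stakes are than saving 8 billion lives, and suppose marginal x-risk cost-effectiveness falls by a factor of $10^{k}$. The faction's ranking flips, and it reallocates, only when

$$\frac{V}{10^{k}} \lesssim 1 \quad\Longleftrightarrow\quad V \lesssim 10^{k},$$

so for $k \le 3$ (a few orders of magnitude of diminishing returns) you need $V$ somewhere in the 1-1000 range, which is exactly why a faction that puts astronomical value on the future never reallocates.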
Basically, I’m claiming that ITC (importance, tractability, crowdedness) + “long-term future is astronomically important” is not enough to get the EA community to actually change its “longtermist” interventions in practice, nor is ITC + a moral parliament. This doesn’t mean we should stop allocating resources to preventing x-risk once it costs $1 billion per 0.0000001% or something, but we do need more assumptions.
reducing x-risk is better on the margin as long as 0.00000000000000000000000000001% chance of doom is prevented by the next billion dollars, and this will basically always be true.
So you think there won’t be diminishing returns to x-risk interventions?
Downvote for obviously misinterpreting me instead of making any number of potentially quite reasonable points directly.
Okay, so you think there are diminishing returns, but the magnitude is so small as to be irrelevant when comparing across causes?
Sorry if this was rude. Basically on the meta level, I’m (i) afraid of this being some rhetorical trap to pin me down on some position that turns out to be false by accident rather than a good-faith effort to find the truth, and (ii) a bit annoyed that this is taking multiple replies. So I want some assurance that you’re either being Socratic or trying to find what I believe rather than just trying to win an argument.
On the object level, I think returns diminish by a few orders of magnitude. I haven’t worked out my exact moral views in practice; I’m mainly just observing that to get reallocation in response to a few orders of magnitude of diminishing returns, your morality has to have certain properties, and the two candidates that first came to mind (total-QALY maximization and a fixed-weight moral parliament) didn’t have these properties.