Does longtermism vs neartermism boil down to cases of tiny probabilities of x-risk?
When P(x-risk) is high, both longtermists and neartermists max out their budgets on it. We have convergence.
When P(x-risk) is low, the expected value of working on it is low for neartermists (since they only care about the next few generations) and high for longtermists (since they care about all future generations). Here, longtermists will focus on x-risks, while neartermists won’t.
I think for moderate to high levels of x-risk, another potential divergence is that while both longtermism and non-longtermism axiologies will lead you to believe that large scale risk prevention and mitigation is important, specific actions people take may be different. For example:
non-longtermist axiologies will, all else equal, be much more likely to prioritize non-existential global catastrophic risks (GCRs) over existential ones
mitigation of existential risks (especially worst-case mitigation) is comparatively more important for longtermists than for non-longtermists.
Some of these divergences were covered at least as early as Parfit (1982). (Note: I did not reread this before making this comment).
I agree that these divergences aren’t very strong for traditional AGI x-risk scenarios; in those cases, I think whether and how much you prioritize AGI x-risk depends almost entirely on empirical beliefs.
Agreed, that’s another angle. NTs will see only a small difference between non-extinction-level catastrophes (NECs) and extinction-level catastrophes (ECs), e.g. a nuclear war where 1,000 people survive vs. one that kills everyone, whereas LTs will see a huge difference between NECs and ECs.
But again, whether non-extinction catastrophe or extinction catastrophe, if the probabilities are high enough, then both NTs and LTs will be maxing out their budgets, and will agree on policy. It’s only when the probabilities are tiny that you get differences in optimal policy.
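The convergence/divergence point above can be put in a toy numerical sketch. All numbers below (population per generation, generations valued, the funding bar) are hypothetical illustrations, not estimates: the idea is just that both camps fund x-risk work when its expected value clears some bar, but they value very different numbers of future generations.

```python
# Toy model (all numbers hypothetical) of NT/LT convergence and divergence.
PEOPLE_PER_GENERATION = 10_000_000_000  # ~10B people, a round placeholder
FUNDING_BAR = 1_000_000                 # people-equivalents needed to fund

def ev_of_averting_xrisk(p_xrisk, generations_valued):
    """Expected people-equivalents saved by fully averting the risk."""
    return p_xrisk * generations_valued * PEOPLE_PER_GENERATION

for p in (0.3, 1e-6):
    nt = ev_of_averting_xrisk(p, generations_valued=3)      # neartermist horizon
    lt = ev_of_averting_xrisk(p, generations_valued=10**6)  # longtermist horizon
    print(f"P={p}: NT funds? {nt > FUNDING_BAR}, LT funds? {lt > FUNDING_BAR}")

# At P=0.3 both fund (convergence); at P=1e-6 only the longtermist does.
```

Under these placeholder numbers, high P(x-risk) clears the bar for both worldviews, while tiny P(x-risk) clears it only for the longtermist, matching the divergence described above.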
I think you are very interested in cause area selection, in the sense of how resources can be “rationally allocated” across cause areas by some sort of normative, analytical model that can be shared and modified.
For example, you might want such a model because you could then modify its underlying parameters to create new allocations. If the model were correct and powerful, this process would illuminate what those parameters and assumptions are, laying bare underlying insights about the world and allowing different people to express different values and principles.
The above analytical model is in contrast to a much more atheoretical “model”, where resources are allocated by the judgement of a few people who try to choose between causes in a modest and principled way.
I’m not sure your goal is possible.
In short, it seems the best that can be done is for resources to be divided up in a way that bends according to principled but less legible decisions made by senior leaders. This seems fine, or at least the best we can do.
Below are some thoughts about this. The first two points touch on general considerations, while I think Cotra’s points are the best place to start from.
Something that I think confuses even people who spend a lot of time engaging with EA material is that EA is not really a method for finding cause areas and interventions. It’s a social movement that has found several cause areas and interventions.
I think one key difference this perspective brings is that cause areas and new kinds of interventions are limited by the supply of high-quality leadership, management, and judgement, and somewhat less by the fact that they haven’t been “discovered” or “researched”, in the sense that someone could just write them up.
Another key difference is that the existing, found cause areas are often shaped by historical reasons, so they aren’t an absolute guide to what should be done.
This post by Applied Divinity Studies (which I suspect is being arch and slightly subversive) asks what EAs on the forum (much less the public) are supposed to do to inform funding decisions, if anything.
(This probably isn’t the point ADS wanted to make or would agree with), but my takeaway is that judgement on any cause is hard and valuable, and EA Forum discussion is underpowered and largely ineffective.
It raises the question: what is the role and purpose of this place? I’ve tried to interrogate and understand that (my opinion has fallen with each new update and reached a sort of solipsistic nadir).
Ajeya Cotra’s reasoning in this interview basically addresses your question most directly and seems close to the best a human being or EA can do right now:
the next dollar it would spend would actually be aiming to help humans. But I really don’t see it for the longtermist versus near-termist worldview, because of the massive differences in scale posited, the massive differences in scale of the world of moral things posited, and because the longtermist worldview could always just sit and wait.
I think we’ve moved into a bit more of an atheoretical perspective, where there may be a larger number of buckets of giving, and each of those buckets of giving will have some estimate of cost effectiveness in terms of its own kind of native units.
So, we’re kind of moving into something where we sort of talk tranche by tranche...and we try to do our best to do the math on them in terms of the unit of value purchase per dollar, and then also think about these other intangibles and argue really hard to come to a decision about each bucket.
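One way to picture the “tranche by tranche” framing in Cotra’s quote is as a set of buckets, each scored in its own native units. The buckets and figures below are hypothetical placeholders, not Open Philanthropy’s actual numbers:

```python
# Sketch of the "native units" framing (all buckets and figures hypothetical):
# each bucket gets a cost-effectiveness estimate in its own unit, and there is
# no shared exchange rate mapping one bucket's unit onto another's.
buckets = {
    "global health":  {"unit": "DALYs averted per $",           "estimate": 0.02},
    "animal welfare": {"unit": "hen-years improved per $",      "estimate": 1.5},
    "longtermism":    {"unit": "basis points of x-risk per $B", "estimate": 1.0},
}

for name, b in buckets.items():
    print(f"{name}: ~{b['estimate']} {b['unit']}")
```

The missing function that would convert these units into one another is exactly what makes cross-bucket comparison “argue really hard” territory rather than arithmetic.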
What she’s sort of saying is that the equations in each cause area don’t touch each other. There are buckets, or worlds, where cost-effectiveness and long-term effects are different.
Note that this isn’t the whole picture at all. Many people, including some of the major funders and leaders, point out interactions or spillover value from one cause area to another, as you’ve seen with MacAskill, but this happens with everyone. It’s pretty complicated, though.
So, anyways, if there were a purely cause-neutral EA trying to figure this all out, I would start from Ajeya Cotra’s reasoning.
Yes, I think of EA as optimally allocating a budget to maximize social welfare, analogous to the constrained utility maximization problem in intermediate microeconomics.
The worldview diversification problem lies in putting everything in common units (e.g. comparing human and animal lives, or comparing current and future lives). Uncertainty over these ‘exchange rates’ translates into uncertainty in our optimal budget allocation.
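A minimal Monte Carlo sketch of how exchange-rate uncertainty propagates into allocation uncertainty. The per-dollar values and the log-uniform prior over the human-vs-animal exchange rate are hypothetical assumptions chosen only for illustration:

```python
import random

# Sketch (hypothetical numbers): uncertainty over the human-vs-animal
# "exchange rate" translates into uncertainty over the optimal allocation.
random.seed(0)

HUMAN_VALUE_PER_DOLLAR = 1.0     # normalized: 1 human-equivalent unit per $
ANIMAL_UNITS_PER_DOLLAR = 100.0  # hen-years improved per $ (hypothetical)

def optimal_bucket(exchange_rate):
    """With linear value, the whole budget goes to whichever bucket wins."""
    animal_value_per_dollar = ANIMAL_UNITS_PER_DOLLAR * exchange_rate
    return "animals" if animal_value_per_dollar > HUMAN_VALUE_PER_DOLLAR else "humans"

# Draw the exchange rate (human-equivalents per hen-year) from a wide
# log-uniform prior spanning four orders of magnitude.
draws = [10 ** random.uniform(-4, 0) for _ in range(10_000)]
share_animals = sum(optimal_bucket(r) == "animals" for r in draws) / len(draws)
print(f"fraction of draws favoring animals: {share_animals:.2f}")
```

Under this prior, roughly half the draws flip the optimal allocation entirely, which is the point: modest uncertainty in the exchange rate produces radical uncertainty in where the budget should go.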
I just wrote out at least one good reference showing that EAs can’t really put things in common units.
It’s entirely possible I’m wrong, but as a general principle it seems like a good idea to identify where I’m wrong, or even just describe how your instincts tell you to do something different, which can be valid.
I mean, for one thing, you get “fanaticism”, a.k.a. “corner solutions”, for most reductive attempts at this kind of constrained maximization.
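A small sketch of the corner-solution point, with hypothetical per-dollar values: a linear objective sends the whole budget to the single highest-EV bucket, while diminishing (log) returns yield an interior split.

```python
import math

# Hypothetical setup: two buckets with constant value per dollar; bucket "a"
# dominates. A linear objective under a budget constraint hits a corner.
BUDGET = 100.0
VALUES = {"a": 3.0, "b": 1.0}  # value per dollar (hypothetical)

def best_linear_allocation(step=1.0):
    """Grid-search x (dollars to "a") maximizing the linear objective."""
    best = None
    x = 0.0
    while x <= BUDGET:
        total = VALUES["a"] * x + VALUES["b"] * (BUDGET - x)
        if best is None or total > best[1]:
            best = (x, total)
        x += step
    return best[0]

def best_log_allocation(step=1.0):
    """Same search with log (diminishing-returns) utility per bucket."""
    best = None
    x = step
    while x < BUDGET:
        total = VALUES["a"] * math.log(x) + VALUES["b"] * math.log(BUDGET - x)
        if best is None or total > best[1]:
            best = (x, total)
        x += step
    return best[0]

print(best_linear_allocation())  # 100.0: the entire budget to "a" (a corner)
print(best_log_allocation())     # 75.0: an interior 75/25 split
```

The linear case is the “fanatical” outcome; adding curvature (diminishing returns, or worldview diversification treated as a constraint) is what pulls the solution into the interior.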
I agree that it’s a difficult problem, but I’m not sure that it’s impossible.
I don’t know much about anything really, but IMO it seems really great that you are interested.
There are many people with the same thoughts or interests as you. It will be interesting to see what you come up with.
Appreciate your support!