Thanks for writing this up, Oscar! I largely disagree with the (admittedly tentative) conclusions, and am not sure how apt I find the NIMBY analogy. But even so, I found the ideas in the post helpfully thought-provoking, especially given that I would probably fall into the cosmic NIMBY category as you describe it.
First, on the implications you list. I think I would be quite concerned if some of your implications were adopted by many longtermists (who would otherwise try to do good differently):
Support pro-expansion space exploration policies and laws
Even accepting the moral case for cosmic YIMBYism (that aiming for a large future is morally warranted), it seems far from clear to me that support for pro-expansion space exploration policies would actually improve expected wellbeing for the current and future world. Such policies & laws could share many of the downsides that colonialism and expansionism have had historically:
Exploitation of humans & the environment for the sake of funding and otherwise enabling these explorations;
Planning problems: Colonial-esque megaprojects like massive space exploration likely constitute a bigger task than human planners can reasonably take on, leading to large chances of catastrophic errors in planning & execution (as evidenced by past experiences with colonialism and similarly grand but elite-driven endeavours);
Power dynamics: Colonial-esque megaprojects like massive space exploration seem prone to reinforcing the prestige, status, and power of those people who are capable of and willing to support these grand endeavours, who (looking at historical colonial-esque megaprojects) do not have a strong track record of being the type of people well-suited to moral leadership and welfare-enhancing actions (you do acknowledge this when you talk about ruthless expansionists and Molochian futures, but I think it warrants more concern and worry than you grant);
(Exploitation of alien species (if there happened to be any, which maybe is unlikely? I have zero knowledge about debates on this)).
This could mean that it is more neglected and hence especially valuable for longtermists to focus on making the future large conditional on there being no existential catastrophe, compared to focusing on reducing the chance of an existential catastrophe.
It seems misguided and, to me, dangerous to go from “extinction risk is not the most neglected thing” to “we can assume there will be no extinction and should take actions conditional on humans not going extinct”. My views on this are to some extent dependent on empirical beliefs which you might disagree with (curious to hear your response there!): I think humanity’s chances to avert global catastrophe in the next few decades are far from comfortably high, and I think the path from global catastrophe to existential peril is largely unpredictable but it doesn’t seem completely inconceivable that such a path will be taken. I think there are far too few earnest, well-considered, and persistent efforts to reduce global catastrophic risks at present. Given all that, I’d be quite distraught to hear that a substantial fraction (or even a few members) of those people concerned about the future would decide to switch from reducing x-risk (or global catastrophic risk) to speculatively working on “increasing the size of the possible future”, on the assumption that there will be no extinction-level event to preempt that future in the first place.
---
On the analogy itself: I think it doesn’t resonate super strongly (though it does resonate a bit) with me because my definition of and frustration with local NIMBYs is different from what you describe in the post.
In my reading, NIMBYism is objectionable primarily because it is a short-sighted and unconstructive attitude that obstructs efforts to combat problems that affect all of us; the thing that bugs me most about NIMBYs is not their lack of selflessness but their failure to understand that everyone, including themselves, would benefit from the actions they are trying to block. For example, NIMBYs objecting to high-rise apartment buildings seem to me to be mistaken in their belief that such buildings would decrease their welfare: the lack of these apartment buildings will make it harder for many people to find housing, which exacerbates problems of homelessness and local poverty, which decreases living standards for almost everyone living in that area (incl. those who have the comfort of a spacious family house, unless they are amongst the minority who enjoy or don’t mind living in the midst of preventable poverty and, possibly, heightened crime). It is a stubborn blindness to arguments of that kind and an unwillingness to consider common, longer-term needs over short-term, narrowly construed self-interests that form the core characteristic of local NIMBYs in my mind.
The situation seems to be different for the cosmic NIMBYs you describe. I might well be working with an unrepresentative sample, but most of the people I know/have read who consciously reject cosmic YIMBYism do so not primarily on grounds of narrow self-interest but for moral reasons (population ethics, non-consequentialist ethics, etc) or empirical reasons (incredibly low tractability of today’s efforts to influence the specifics about far-future worlds; fixing present/near-future concerns as the best means to increase wellbeing overall, including in the far future). I would be surprised if local NIMBYs were motivated by similar concerns, and I might actually shift my assessment of local NIMBYism if it turned out that they are.
Thanks for this really thoughtful engagement! I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought; this is useful to hear. Perhaps, having only played with these ideas myself and with a few relatively similar people, I failed to realise how controversial and provocative they would be. Onto the substance:
It makes sense to me that the analogy is a bit weak; I think I mostly agree. The strongest part of the analogy, to me, is less the NIMBYs themselves and more who is politically empowered (a smaller group that is better coordinated, and actually existing, than the larger group of possible beneficiaries). Maybe I should have foregrounded this more, actually.
Re space expansion/colonisation, yeah, I don’t have much idea of how all this would work, so my take is intuition-based. It is interesting, I think, how people have such different intuitive reactions to space expansion, roughly along these lines: pro-market, pro-“progress”, technologist, capitalist types (partially including me) pattern-match space exploration to other things they like, and so intuitively like it; whereas environmentalists, localists, post-colonialists, social justice-oriented people, degrowthers, etc (also partially including me, but probably to a lesser extent) are intuitively pretty opposed. But I think it is reasonable to at least be worried about the socio-political consequences of a space focus: I am not at all sure how it would play out, and I am probably somewhat more optimistic than you, but yes, your worries seem plausible.
I completely agree there are far too few people working on x-risks, that there should be far more, that collapse is dangerous and scary, and that we are very much not out of the woods and things could go terribly. I suppose it is the nature of being scope-sensitive and prioritarian though that something being very important and neglected and moderately tractable (like x-risk work) isn’t always enough for it to be the ‘best’ (granted, re your previous post, that this may not make sense). I’m not sure if this is what you had in mind, but I think there is some significance to risk-averse decision-making principles, where maybe avoiding extinction is especially important even compared to building (an even huger) utopia. So I think I have less clear views on what practically is best for people like me to be doing (for now I will continue to focus on catastrophic and existential risks). But I still think in principle it could be reasonable to focus on making a great future even larger and greater, even if that is unlikely. Another, perhaps tortured, analogy: you have founded a company, and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line if everything else falls into place nicely.
As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic; otherwise I think it would be a lot easier to prematurely dismiss ideas I disagree with or find uncomfortable. So in other words I’m really glad I know you :)
First two points sound reasonable (and helpfully clarifying) to me!
I suppose it is the nature of being scope-sensitive and prioritarian though that something being very important and neglected and moderately tractable (like x-risk work) isn’t always enough for it to be the ‘best’
I share the guess that scope sensitivity and prioritarianism could be relevant here, as you clearly (I think) endorse these more strongly and more consistently than I do; but having thought about it for only 5-10 minutes, I’m not sure I’m able to exactly point at how these notions play into our intuitions and views on the topic—maybe it’s something about me ignoring the [(super-high payoff of larger future)*(super-low probability of affecting whether there is a larger future) = (there is good reason to take this action)] calculation/conclusion more readily?
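To make that bracketed calculation concrete (with numbers I am making up purely for illustration, none of them from your post): say work on enlarging the future has a tiny chance of success but an astronomical payoff, while x-risk work has a much larger chance of success and a merely very large payoff. Then something like

$$\underbrace{10^{-15}\times 10^{35}\ \text{lives}}_{\text{enlarge the future}} = 10^{20} \;\gg\; \underbrace{10^{-6}\times 10^{14}\ \text{lives}}_{\text{reduce x-risk}} = 10^{8},$$

so the consistent expected-value maximiser takes the first option, and I seem to be the one more willing to just set that verdict aside.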
That said, I fully agree that “something being very important and neglected and moderately tractable (like x-risk work) isn’t always enough for it to be the ‘best’ ”. To figure out which option is best, we’d need to somehow compare their respective scores on importance, neglectedness, and tractability… I’m not sure actually figuring that out is possible in practice, but I think it’s fair to challenge the claim that “action X is best because it is very important and neglected and moderately tractable” regardless. In spite of that, I continue to feel relatively confident in claiming that efforts to reduce x-risks are better (more desirable) than efforts to increase the probable size of the future, because the former protects an unstable precondition for the latter (and because I strongly doubt the tractability and am at least confused about the desirability of the latter).
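(For concreteness, the standard way I have seen that comparison operationalised, e.g. in 80,000 Hours’ framework, is as a product of three factors:

$$\frac{\text{good done}}{\text{extra resources}}=\underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}\times\underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}\times\underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}$$

My doubt is mostly that we cannot estimate the tractability factor for “increasing the size of the future” with any confidence.)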
Another, perhaps tortured, analogy: you have founded a company, and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line if everything else falls into place nicely.
I think my stance on this example would depend on the present state of the company. If the company is in really dire straits, I’m resource-constrained, and there are more things that need fixing now than I feel able to easily handle, I would seriously question whether one of my employees should spend their time thinking about making best-case future scenarios the best they can be[1]. I would question this even more strongly if I thought that the world and my company (if it survives) will change so drastically in the next 5 years that the employee in question has very little chance of imagining and planning for that eventuality.
(I also notice while writing that a part of my disagreement here is motivated by values rather than logic/empirics: part of my brain just rejects the objective of massively expanding and improving a company/situation that is already perfectly acceptable and satisfying. I don’t know if I endorse this intuition for states of the world (I do endorse it pretty strongly for private life choices), but can imagine that the intuitive preference for satisficing informs/shapes/directs my thinking on the topic at least a bit—something for myself to think about more, since this may or may not be a concerning bias.)
I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought; this is useful to hear. [...] As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic; otherwise I think it would be a lot easier to prematurely dismiss ideas I disagree with or find uncomfortable.
+100 :)
[1] (This is not to say that it might not make sense for one or a few individuals to think about the company’s mid- to long-term success; I imagine that type of resource allocation will be quite sensible in most cases, because it’s not sustainable to keep the company in day-to-day survival mode forever; but I think that’s different from asking these individuals to paint a best-case future so as to be prepared to make a good outcome even better.)
That makes sense; yes, perhaps there are some fanaticism worries re my make-the-future-large approach, even more so than for x-risk work, and maybe I am less resistant to fanaticism-flavoured conclusions than you. That said, I think not all work like this need be fanatical; e.g. improving international cooperation and treaties for space exploration could be good in more frames (and bad in some frames you brought up, granted).
I don’t know lots about it, but I wonder if you prefer more of a satisficing decision theory, where we want to focus on getting a decent outcome rather than necessarily the best one (e.g. Bostrom’s ‘Maxipok’ rule: maximise the probability of an OK outcome). So I think not wholeheartedly going for maximum expected value isn’t a sign of irrationality, and could reflect a different but sound decision approach.
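To illustrate the difference (a minimal sketch with toy numbers I am inventing, and an arbitrary ‘OK’ threshold; this only shows the shape of the two decision rules, not a model of the actual stakes):

```python
# Toy comparison of expected-value maximisation vs a Maxipok-style rule
# ("maximise the probability of an OK outcome"). All numbers are invented
# purely for illustration. Each action maps to (probability, value) outcomes.

actions = {
    "reduce x-risk": [(0.90, 100), (0.10, 0)],
    "enlarge the future": [(0.01, 1_000_000), (0.99, 50)],
}

OK_THRESHOLD = 80  # arbitrary cutoff for what counts as an "OK" outcome


def expected_value(outcomes):
    """Sum of probability-weighted values."""
    return sum(p * v for p, v in outcomes)


def prob_ok(outcomes, threshold=OK_THRESHOLD):
    """Total probability of landing at or above the 'OK' threshold."""
    return sum(p for p, v in outcomes if v >= threshold)


for name, outcomes in actions.items():
    print(f"{name}: EV = {expected_value(outcomes):,.0f}, "
          f"P(OK) = {prob_ok(outcomes):.2f}")

# reduce x-risk:       EV = 90,     P(OK) = 0.90
# enlarge the future:  EV = 10,050, P(OK) = 0.01
```

On these made-up numbers the expected-value maximiser picks ‘enlarge the future’ while Maxipok picks ‘reduce x-risk’, which I think is roughly the shape of our disagreement.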