Thanks for this really thoughtful engagement! I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought; this is useful to hear. Perhaps I failed to realise how controversial and provocative these ideas would be after playing with them myself and with a few relatively similar people. Onto the substance:
It makes sense to me that the analogy is a bit weak; I think I mostly agree. The strongest part of the analogy, to me, is less the NIMBYs themselves and more who is politically empowered (a smaller, better-coordinated, and actually existing group, as opposed to the larger group of possible beneficiaries). Maybe I should have foregrounded this more, actually.
Re space expansion/colonisation, yeah, I don't have much idea about how all this would work, so it is intuition-based. It is interesting, I think, how people have such different intuitive reactions to space expansion, roughly along these lines: pro-market, pro-"progress", technologist, capitalist types (partially including me) pattern-match space exploration to other things they like, and so are intuitively in favour. Whereas environmentalists, localists, post-colonialists, social justice-oriented people, degrowthers etc. (also partially including me, but probably to a lesser extent) are intuitively pretty opposed. But I think it is reasonable to at least be worried about the socio-political consequences of a space focus: I am not at all sure how it would play out, and I am probably somewhat more optimistic than you, but yes, your worries seem plausible.
I completely agree there are far too few people working on x-risks, and that there should be far more; collapse is dangerous and scary, and we are very much not out of the woods and things could go terribly. I suppose it is the nature of being scope-sensitive and prioritisationist, though, that something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the "best" (granted, re your previous post, this may not make sense). I'm not sure if this is what you had in mind, but I think there is some significance to risk-averse decision-making principles, where maybe avoiding extinction is especially important even compared to building (an even huger) utopia. So I think I have less clear views on what practically is best for people like me to be doing (for now I will continue to focus on catastrophic and existential risks). But I still think in principle it could be reasonable to focus on making a great future even larger and greater, even if that is unlikely. Another, perhaps tortured, analogy: you have founded a company and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line, if everything else falls into place nicely.
As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic; otherwise I think it would be a lot easier to prematurely dismiss ideas I disagree with or find uncomfortable. So in other words, I'm really glad I know you :)
First two points sound reasonable (and helpfully clarifying) to me!
I suppose it is the nature of being scope-sensitive and prioritisationist, though, that something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the "best"
I share the guess that scope sensitivity and prioritisation could be relevant here, as you clearly (I think) endorse these more strongly and more consistently than I do; but having thought about it for only 5-10 minutes, I'm not sure I'm able to point at exactly how these notions play into our intuitions and views on the topic. Maybe it's something about me more readily ignoring the [(super-high payoff of a larger future) × (super-low probability of affecting whether there is a larger future) = (there is good reason to take this action)] calculation/conclusion?
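To make that bracketed calculation concrete, here is a toy version; the payoff and probability figures are entirely invented placeholders, not anyone's actual estimates:

```python
# Illustrative only: invented numbers showing how the bracketed
# expected-value reasoning behaves, not anyone's actual estimates.
payoff_of_larger_future = 1e15   # assumed "super-high" payoff of a larger future
p_my_action_matters = 1e-10      # assumed "super-low" probability my action changes the outcome

expected_value = payoff_of_larger_future * p_my_action_matters
print(expected_value)  # 100000.0: a tiny probability times a huge payoff
                       # can still swamp more modest, more certain options
```

Taken at face value, this kind of multiplication lets an arbitrarily small probability be outweighed by a sufficiently large payoff, which is exactly the conclusion I find myself more ready to ignore.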
That said, I fully agree that "something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best'". To figure out which option is best, we'd need to somehow compare their respective scores on importance, neglectedness, and tractability. I'm not sure actually figuring that out is possible in practice, but I think it's fair to challenge the claim that "action X is best because it is very important and neglected and moderately tractable" regardless. In spite of that, I continue to feel relatively confident in claiming that efforts to reduce x-risks are better (more desirable) than efforts to increase the probable size of the future, because the former is an unstable precondition for the latter (and because I strongly doubt the tractability, and am at least confused about the desirability, of the latter).
Another, perhaps tortured, analogy: you have founded a company and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company 5 years down the line, if everything else falls into place nicely.
I think my stance on this example would depend on the present state of the company. If the company is in really dire straits, I'm resource-constrained, and there are more things that need fixing now than I feel able to easily handle, I would seriously question whether one of my employees should spend time thinking about making best-case future scenarios the best they can be[1]. I would question this even more strongly if I thought that the world and my company (if it survives) will change so drastically in the next 5 years that the employee in question has very little chance of imagining and planning for the eventuality.
(I also notice while writing that a part of my disagreement here is motivated by values rather than logic/empirics: part of my brain just rejects the objective of massively expanding and improving a company/situation that is already perfectly acceptable and satisfying. I don't know if I endorse this intuition for states of the world (I do endorse it pretty strongly for private life choices), but I can imagine that the intuitive preference for satisficing informs/shapes/directs my thinking on the topic at least a bit. Something for myself to think about more, since this may or may not be a concerning bias.)
I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought; this is useful to hear. [...] As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic; otherwise I think it would be a lot easier to prematurely dismiss ideas I disagree with or find uncomfortable.
+100 :)
[1] (This is not to say that it might not make sense for one or a few individuals to think about the company's mid- to long-term success; I imagine that type of resource allocation will be quite sensible in most cases, because it's not sustainable to keep the company in day-to-day survival mode forever; but I think that's different from asking these individuals to paint a best-case future in order to be prepared to make a good outcome even better.)
That makes sense; yes, perhaps there are some fanaticism worries re my make-the-future-large approach, even more so than for x-risk work, and maybe I am less resistant to fanaticism-flavoured conclusions than you. That said, I think not all work like this need be fanatical: e.g. improving international cooperation and treaties for space exploration could be good in many frames (and bad in some frames you brought up, granted).
I don't know lots about it, but I wonder if you prefer more of a satisficing decision theory, where we want to focus on getting a decent outcome rather than necessarily the best one (e.g. Bostrom's "Maxipok" rule). So I think not wholeheartedly going for maximum expected value isn't a sign of irrationality, and could reflect a different, but sound, decision approach.
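A toy sketch of that contrast, with invented numbers and my own loose paraphrase of maxipok as "rank options by their probability of an OK outcome" (not Bostrom's exact formulation):

```python
# Contrast straight expected-value maximisation with a maxipok-style
# satisficing rule. All probabilities, payoffs, and the "OK" threshold
# are invented for illustration.

options = {
    # option: list of (probability, payoff) outcomes
    "reduce x-risk":      [(0.9, 1.0), (0.1, 0.0)],       # likely decent outcome
    "enlarge the future": [(1e-6, 1e9), (1 - 1e-6, 0.0)], # tiny chance of huge payoff
}

OK_THRESHOLD = 1.0  # payoff at or above this counts as a "decent" outcome

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def p_ok(outcomes):
    return sum(p for p, v in outcomes if v >= OK_THRESHOLD)

ev_choice = max(options, key=lambda o: expected_value(options[o]))
maxipok_choice = max(options, key=lambda o: p_ok(options[o]))

print(ev_choice)       # "enlarge the future": EV of 1000.0 beats 0.9
print(maxipok_choice)  # "reduce x-risk": P(OK) of 0.9 beats 1e-06
```

Under straight EV maximisation the tiny-probability, huge-payoff option wins; under the satisficing rule the safer option does. Both are internally consistent, which is why I think the difference between us may come down to decision approach rather than irrationality on either side.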