Thanks for the reply! I think you make a lot of good arguments, and I'm not sure where I sit on this issue.
I found your last paragraph a little disturbing, because even granting the truth of longtermism, some of these ideas seem like they wouldn't necessarily serve the present or the future particularly well. Would 100 percent of philosophers working on the question of the far future really be the best way to improve the field, with other important areas of philosophy neglected? Widespread surveillance has already proven too unpalatable to most Westerners, and impractical even if it did prevent someone from making a bioweapon. Personally I think that even given longtermism, most people should be working on fixing existing issues (governance, suffering, climate change), as fixing things now will also benefit the future. Perhaps 100 to 1,000x the current number of people working on longtermist causes would be ideal, but I think gearing the whole engine of the world towards longtermism might well be counterproductive.
“Virtually the whole machine learning community would be working on the technical problem of AI alignment. Governments would have departments for reducing existential risk / for future generations. 100% of philosophers would be working on the question ‘how can we best improve the far future?’ We would save a lot more than we do and mitigate climate change far more than we do. We might even have widespread surveillance to ensure we don’t destroy ourselves (to be clear, I am personally unsure if this would be required/desirable). We would have everyone on earth working together to improve the far future instead of what we have now—countries working against each other to come out on top.”
Hey, thanks for your comment! To be honest, my addendum is a bit speculative and I haven’t thought about it a huge amount. I think I may have been a little extreme, and that factoring in moral uncertainty would soften some of what I said.
Would 100 percent of philosophers working on the question of the far future really be the best way to improve the field, with other important areas of philosophy neglected?
When you say “improve the field” I’m not sure what you mean. Personally, I don’t think there is intrinsic value in philosophical progress, only instrumental value. It seems desirable for the philosophy field to reorient in a way that focuses on improving the world as much as possible, and that is likely to mean at least some subfields entirely or nearly die out (e.g. aesthetics? philosophy of religion?). I suspect a lot of subfields would continue even if we were to focus on improving the future, though, as most of them have some useful role to play. The specific questions philosophers work on within those subfields would change quite a bit, however.
Widespread surveillance has already proven too unpalatable to most Westerners, and impractical even if it did prevent someone from making a bioweapon.
I tried to express agnosticism about whether this would be desirable, and I am very sympathetic to arguments that it wouldn’t be.
Personally I think that even given longtermism, most people should be working on fixing existing issues (governance, suffering, climate change), as fixing things now will also benefit the future.
I did mention the importance of alleviating suffering and tackling climate change in my post, so I’m not sure we disagree as much as you think we do. “Governance” is a bit vague, but many forms of governance work can easily be justified on longtermist grounds (as can work on climate change).
I think gearing the whole engine of the world towards longtermism might well be counterproductive
It is possible that “obsessing” about the far future is counterproductive. If so, we would be justified in obsessing less. Note, however, that we would then be obsessing less on longtermist grounds.