Yeah, I think we have a substantive disagreement. My impression before and after reading your list above is that you think that being convinced of longtermism is not very important for doing work that is stellar according to “longtermism”, and that it’s relatively easy to convince people that x-risk/AIS/whatever is important.
I agree with the literal claim, but think that, empirically, longtermists represent the bulk of people who concern themselves with thinking clearly about how wild the future could be. I don’t think all longtermists do this, but longtermism empirically seems to provide a strong motivation for trying to think about how wild the future could be at all.[1]
I also believe that thinking clearly about how wild the future could be is an important and often counterfactual trait for doing AIS work that I expect to actually be useful (though it’s obviously not necessary in every case). Lots of work in the name of AIS is done by non-longtermists (which is great), but at the object level, I often feel their work could have been much more impactful if they had tried to think more concretely about wild AI scenarios. I know that longtermism is not about AI, and most longtermists are not actually working on AI.
So, for me, the dominant question is whether more longtermist writing increases or decreases the supply of people trying to think clearly about the future. Overall, I’m like … weakly increases (?), and there aren’t many other leveraged interventions for getting people to think about the future.
I would be much more excited about competitions like:
1. Write branches of the AI 2027 forecast from wherever you disagree (which could be at the start).
2. Argue for features of a pre-IE society that would help it navigate the IE well, and roadmap how we might get more of that feature, or identify critical R&D challenges for navigating an IE well.
etc.
Also, somewhat unrelated to the above: I suspect that where “philosophy” starts for me might be at a lower level of abstraction than where it starts for you. I would include things like Paul writing about what a good successor would look like, Ryan writing about why rogue AI may not kill literally everyone, etc., as “philosophy”, though I’m not arguing that either of those specific discussions is particularly important.
P.S. Fwiw, I don’t think the writing style in this post was particularly poor, or that you came across as grumpy.
[1] I guess there are some non-longtermist Bay Area people trying to do this, but I feel like most of them don’t then take very thoughtful or altruistic actions.