Thanks for commenting!
I’ve tried to spell out my position more clearly, so we can see if/where we disagree. I think:
1. Most discussion of longtermism at the level of generality/abstraction of “is longtermism true?”, “does X moral viewpoint support longtermism?”, or “should longtermists care about cause area X?” is not particularly useful, and is currently oversupplied.
2. Similarly, discussions at the level of abstraction of “acausal trade is a thing longtermists should think about” are rarely useful.
3. I agree that concrete discussions aimed at “should we take action on X” are fairly useful. I’m a bit worried that anchoring too hard on longtermism lends itself to discussing philosophy, and especially discussing philosophy on the level of “what axiological claims are true”, which I think is an unproductive frame. (And even if you’re very interested in the philosophical “meat” of longtermism, I claim all the action is in “ok, but how much should this affect our actions, and which actions?”, which is mostly a question about the world and our epistemics, not about ethics.)
“though I’d be like 50x more excited about Forethought + Redwood running a similar competition on things they think are important that are still very philosophy-ish/high level.” —this is helpful to know! I would not be excited about this, so we disagree at least here :)
“The track record of talking about longtermism seems very strong” —yeah, agree longtermism has had motivational force for many people, and also does strengthen the case for lots of e.g. AI safety work. I don’t know how much weight to put on this; it seems kinda plausible to me that talking about longtermism might’ve alienated a bunch of less philosophy-inclined but still hardcore, kickass people who would’ve done useful altruistic work on AIS, etc. (Tbc, that’s not my mainline guess; I just think it’s more like 10-40% likely than e.g. 1-4%.)
“I feel like this post is more about “is convincing people to be longtermists important” or should we just care about x-risk/AI/bio/etc.” This is fair! I think it’s ~both, and also, I wrote it poorly. (Writing from being grumpy about the essay contest was probably a poor frame.) I am also trying to make a (hotter?) claim about how useful thinking in these abstract frames is, as well as a point on (for want of a better word) PR/reputation/messaging. (And I’m more interested in the first point.)
Yeah, I think we have a substantive disagreement. My impression, before and after reading your list above, is that you think being convinced of longtermism is not very important for doing work that is stellar according to “longtermism”, and that it’s relatively easy to convince people that x-risk/AIS/whatever is important.
I agree with the literal claim, but think that, empirically, longtermists represent the bulk of people who concern themselves with thinking clearly about how wild the future could be. I don’t think all longtermists do this, but longtermism empirically seems to provide a strong motivation for trying to think about how wild the future could be at all.[1]
I also believe that thinking clearly about how wild the future could be is an important and often counterfactual trait for doing AIS work that I expect to actually be useful (though it’s obviously not necessary in every case). Lots of work in the name of AIS is done by non-longtermists (which is great), but at the object level, I often feel their work could have been much more impactful if they had tried to think more concretely about wild AI scenarios. I know that longtermism is not about AI, and most longtermists are not actually working on AI.
So, for me, the dominant question is whether more longtermism writing increases or decreases the supply of people trying to think clearly about the future. Overall, I’m like … weakly increases (?), and there aren’t many other leveraged interventions for getting people to think about the future.
I would be much more excited about competitions like:
1. Write branches of the AI 2027 forecast from wherever you disagree (which could be at the start).
2. Argue for features of a pre-IE (intelligence explosion) society that would help it navigate the IE well, and roadmap how we might get more of that feature, or think about critical R&D challenges for navigating an IE well.
etc.
Also, somewhat unrelated to the above: I suspect that where “philosophy” starts for me might be at a lower level of abstraction than where it starts for you. I would include things like Paul writing about what a good successor would look like, Ryan writing about why rogue AI may not kill literally everyone, etc., as “philosophy”, though I’m not arguing that either of those specific discussions is particularly important.
P.S. fwiw, I don’t think the writing style in this post was particularly poor, or that you came across as grumpy.
[1] I guess there are some non-longtermist Bay Area people trying to do this, but I feel like most of them don’t then take very thoughtful or altruistic actions.