Expanding a bit on a comment I left on the Google Doc version of this: I broadly agree with your conclusion (longtermist ideas are harder to find now than in ~2017), but I don’t think this essay collection was a significant update towards that conclusion. As you mention as a hypothesis, my guess is that these essay collections mostly exist to legitimise discussing longtermism as part of serious academic research, rather than to disseminate important, plausible, and novel arguments. Coming up with an important, plausible, and novel argument which also meets the standards of academic publishing seems much harder than just making some publishable argument, so I didn’t really change my views on whether longtermist ideas are getting harder to find because of this collection’s relative lack of them. (With all the caveats you mentioned above, plus: I enjoyed many of the reprints, and think lots of incrementalist research can be very valuable — it’s just not the topic you’re discussing.)
I’m not sure how much we disagree, but I wanted to comment anyway, in case other people disagree with me and change my mind!
Relatedly, I think what I’ll call the “fundamental ideas” — of longtermism, AI existential risk, etc — are mildly overrated relative to further arguments about the state of the world right now, which make these ideas action-guiding. For example, I think longtermism is a useful label to attach to a moral view, but you need further claims about reasons not to worry about cluelessness in at least some cases, and potentially some claims about hinginess, for it to be very action-relevant. A second example: the “second species” worry about AIXR is fairly obvious on its own, and only becomes relevant given that we’re plausibly close to developing TAI soon and, imo, given that current AI development is weird and poorly understood; evidence from the real world is a potential defeater for this analogy.
I think you’re using “longtermist ideas” to also point at this category of work (fleshing out/adding the additional necessary arguments to big abstract ideas), but I do think there’s a common interpretation where “we need more longtermist ideas” translates to “we need more philosophy types to sit around and think at very high levels of abstraction”. Relative to this, I’m more into work that gets into the weeds a bit more.
Good point, yes, I think empirical findings that have a large bearing on what longtermists should be doing would also count for me, and perhaps empirical work is still an easier place to come up with important new considerations.
Glad you shared this!