Thanks for sharing this, Zoe!
I think your piece is valuable as a summary of weaknesses in existing longtermist thinking, though I don’t agree with all your points or the ways you frame them.
Things that would make me excited to read future work, and IMO would make that work stronger:
Providing more concrete suggestions for improvement. Criticism is valuable, but I’m already aware of many of the weaknesses of our frameworks; what I’m really hungry for is further work on solving them. This probably requires narrowing your focus to specific areas, rather than casting a wide net as you did for this summary paper.
Engaging with the nuances of longtermist thinking on these subjects. For example, when you mention the importance of risk-factor assessment, I don’t see much engagement with e.g. the risk factor / threat / vulnerability model, or with the paper on defense in depth against AI risk. Neither of these models is perfect, but I expect they both have useful things to offer.
I expect this connects with the point above: starting from a viewpoint of what-can-I-build encourages finding the strong points of prior work, rather than the weak points you focused on in this piece.
With regard to harshness, I think part of the reason you get different responses is that you’re writing in the genre of the academic paper. Because authors have to write in a particular formal style, it’s ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it’s not unreasonable to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe.
For example:
As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it’s easy to read into it some amount of value judgment around longtermism and longtermists.