With regard to harshness, I think part of the reason you get different responses is that you’re writing in the genre of the academic paper. Since authors have to write in a particular formal style, it’s ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it’s not unreasonable to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe.
For example:
Under the TUA, an existential risk is understood as one with the potential to cause human extinction directly or lead us to fail to reach our future potential, expected value, or technological maturity. This means that what is classified as a prioritised “risk” depends on a threat model that involves considerable speculation about the mechanisms which can result in the death of all humans, their respective likelihoods, and a speculative and morally loaded assessment of what might constitute our inability to reach our potential. [...]
A risk perception that depends so strongly on speculation and yet-to-be-verified assumptions will inevitably (to varying degrees) be an expression of researchers’ personal preferences, biases, and imagination. If collective resources (such as research funding and public attention) are to be allocated to the highest priority risk, then ERS should attempt to find a more evidence-based, replicable prioritisation procedure.
As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it’s easy to read into it some amount of value judgment around longtermism and longtermists.
Thanks for sharing this, Zoe!
I think your piece is valuable as a summary of weaknesses in existing longtermist thinking, though I don’t agree with all your points or the ways you frame them.
Things that would make me excited to read future work, and IMO would make that work stronger:
Providing more concrete suggestions for improvement. Criticism is valuable, but I’m aware of many of the weaknesses of our frameworks; what I’m really hungry for is further work on solving them. This probably requires focusing down to specific areas, rather than casting a wide net as you did for this summary paper.
Engaging with the nuances of longtermist thinking on these subjects. For example, when you mention the importance of risk-factor assessment, I don’t see much engagement with e.g. the risk factor / threat / vulnerability model, or with the paper on defense in depth against AI risk. Neither of these models is perfect, but I expect they both have useful things to offer.
I expect this links up with the above point. Starting from a viewpoint of what-can-I-build encourages finding the strong points of prior work, rather than the weak points you focused on in this piece.