To be fair to Richard, there is a difference between (a) stating your own personal probability in the time of perils hypothesis and (b) making clear that, for long-termist arguments to fail solely because they rely on the time of perils, you need it to have an extremely low probability, not just a low one, at least if you accept that expected value theory and subjective probability estimates can legitimately be applied here at all, as you seemed to be doing for the sake of making an internal critique. I took it to be the latter that Richard was complaining your paper doesn't do.
How strong do you think your evidence is that most readers of philosophy papers find the claim "X-risk is currently high, but will go permanently very low" extremely implausible? If you asked me to guess, I'd say most people's reaction would be more like "I've no idea how plausible this is, other than definitely quite unlikely", which is very different, but I have no experience with reviewers here.
I am a bit skeptical, though not necessarily entirely, of the "everyone really knows EA work outside development and animal welfare is trash" vibe of your post. I don't doubt a lot of people in professional philosophy do think that. But at the same time, Nick Bostrom is more highly cited than virtually any reviewer you will have encountered. Long-termist moral philosophy turns up in leading journals constantly. One of the people you critiqued in your very good paper attacking arguments for the singularity is Dave Chalmers, and you literally don't get more professionally distinguished in analytic philosophy than Dave. Your work criticizing long-termism seems to have made it into top journals too when I checked, which indicates there certainly are people who think it is not too silly to be worth refuting: https://www.dthorstad.com/papers