As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk, as well as responses to those arguments. The post itself is currently at 26,000 words, and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process, but I now think the delay has been for the best, because several new and important arguments have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.
Judging from the comment, I expect the post to be a very valuable summary of existing arguments against longtermism, and am looking forward to reading it. One request: as Jesse Clifton notes, some of the arguments you list apply only to x-risk (a narrower focus than longtermism), and some apply only to AI risk (a narrower focus than x-risk). It would be great if your post could highlight the scope of each argument.
Strongly agree—I think it’s really important to disentangle longtermism from existential risk from AI safety. I might suggest writing separate posts.
I’d also be keen to see more focus on which arguments seem best, rather than having such a long list (including many that have a strong counter, or are no longer supported by the people who first suggested them), though I appreciate that might take longer to write. A quick fix would be to link to counterarguments where they exist.
Thanks Pablo and Ben. I already have tags below each argument indicating what I think it is arguing against. I do not plan on writing two separate posts, as some arguments count both against longtermism and against the longtermist case for working to reduce existential risk. Each argument and its response are presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.
FWIW I’d still favour two posts (or, if you were only going to do one, focusing on longtermism). I took a quick look at the original list, and I think the arguments divide up pretty well, so you wouldn’t end up with many that need to appear on both lists. That said, I also think it would be fine to have some arguments appear on both lists.
In general, I think conflating the case for existential risk with the case for longtermism has caused a lot of confusion, and it’s really worth pushing against.
For instance, many arguments that undermine the case for working on existential risk actually imply we should focus on (i) investing and capacity building, (ii) global priorities research, or (iii) other ways to improve the future, but they instead get understood as arguments for working on global health.
Thanks Ben. There is actually at least one argument in the draft for each alternative you named. To be honest, I don’t think you can get a good sense of my 26,000-word draft from my 570-word comment from two years ago. I’ll send you my draft when I’m done, but until then, I don’t think it’s productive for us to go back and forth like this.
Any updates on how this post is going? I’m really curious to see a draft!
+1!
While I have made substantial progress on the draft, it is still not ready to be circulated for feedback.
I have shared the draft with Aaron Gertler to show that it is a genuine work in progress.
I’ve completed my draft (now at 47,620 words)!
I’ve shared it via the EA Forum share feature with a number of GPI, FHI, and CLR people who have EA Forum accounts.
I’m sharing it in stages to limit the number of people who have to point out the same issue to me.
That sounds fantastic. I’d love to read the draft once it is circulated for feedback.