I’m going to struggle to cast a meaningful vote on this, because I find the ‘existential risk’ terminology as used in the OP more confusing than helpful. For example, it includes non-existential considerations, and in practice it excludes non-extinction catastrophes from a discussion they should very much be part of, on the heuristic-but-insufficient grounds that we should focus on the events with the highest extinction probability (i.e. AI).
I’ve argued here that non-extinction catastrophes could be as valuable to work on as immediate extinction events, or more so, even if all we care about is the probability of very long-term survival. For this reason I actually find Scott’s linked post extremely misleading: it frames his priorities as ‘existential’ risk, then pushes people entirely towards working on extinction risk, while giving reasons that would apply just as well to non-extinction GCRs. I gave some alternative terminology here, and while I don’t want to insist on my own clunky suggestions, I wish serious discussions would be more precise.
Thanks! Any suggestions for making the clarification of extinction in this post more precise (while still being explainable)?
It’s difficult if the format requires a one-dimensional sliding scale. I think reasonable positions can be opposed on several axes: AI vs other GCRs vs infrastructure vs evidenced interventions; whether the future (if it exists) is good or bad by default; and perhaps whether future generations should be morally discounted.
Ah yes, I agree that the one-dimensional slider doesn’t represent anyone’s entire opinion. But I also don’t think it should: this is why debate week is also a week for writing posts, and why we integrate comments with the banner. There are many considerations that could affect your vote, and that’s great; it’s (hopefully) why the week will be generative.