The s-risk people I’m familiar with are mostly interested in worst-case s-risk scenarios that involve vast populations of sentient beings over vast time periods. It’s hard to form estimates for the scale of such scenarios, and so the importance is difficult to grasp. I don’t think estimating the cost-effectiveness of working on these s-risks would be as simple as measuring in suffering-units instead of QALYs.
Tobias Baumann, for example, mentions in his book and a recent podcast that possibly the most important s-risk work we can do now is simply building capacity so that we are ready at some future point when we can actually do something useful. That includes things like “improving institutional decision-making” and probably also moral circle expansion work such as curtailing factory farming.
I think Baumann also said somewhere that he is sometimes reluctant to mention specific scenarios too much, because it may lead to a complacent feeling that we have dealt with the threats: in reality, the greatest s-risk danger is probably something we don’t even know about yet.
I hope the above is a fair representation of Baumann’s and others’ views. I mostly agree with them, although it feels a bit shady not to be able to specify what the greatest concerns actually are.
I could do a very basic cause-area sense-check of the form:
The greatest s-risks involve huge populations
SO
They probably occur in an interstellar civilisation
AND
Are likely to involve artificial minds (which could probably exist at a far greater density than biological people)
HENCE
Work on avoiding the worst s-risks is likely to involve influencing whether/how we become a spacefaring civilisation and whether/how we develop and use sentient minds.