Thanks for writing this post! I'm a fan of your work and am excited for this discussion.
Here's how I think about costs vs. benefits:
I think an existential catastrophe is at least 1000x as bad as a GCR that was guaranteed not to turn into an x-risk. The future is very long, and humanity seems able to achieve a very good one, but it currently looks very vulnerable to me.
I think I can have a tractable impact on reducing that vulnerability. It doesn't seem to me that my expected impact on human progress would match the expected value of my chance of helping save the future. Obviously that needs some fleshing out: what is my impact on x-risk, what is my impact on progress, how likely am I to have those impacts, and so on. But that's the structure of how I think about it.
After initially worrying about Pascal's mugging, I've come to believe that x-risk is in fact substantially more likely than 1 in several million, so whatever objections I might have to Pascal's mugging don't really apply.
How I think about tech progress:
From an x-risk perspective, I'm pretty ambivalent about tech progress. I've heard arguments that it's good and that it's bad, but mostly I don't think it has a very predictable or large effect on the margin.
But while I care a lot about x-risk reduction, I have different world-views that I put substantial credence in as well. And basically all of those other world-views care a whole lot about human progress. So while I don't view human progress as the cause of my life the way I do x-risk reduction, I'm strongly in favor of more of it.
Finally, as you can imagine from my last answer, I definitely have a lot of conversations where I try to convey my optimism about technology's ability to make lives better. And I think that's pretty common: your blog is well-read in my circles.
Thanks JP!

Minor note: the "Pascal's mugging" isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).
By that token, most individual scientific experiments or contributions to political efforts may be like this: e.g. if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, even though the expected value and average returns could be high and the collective effort has a large chance of success.
Sure, but the delta you can achieve with anything is small, depending on how you delineate an action. True, x-risk reduction is on the more extreme end of this spectrum, but I think the question should be "can these small deltas/changes add up to a big delta/change (vs. the cost of that choice)?", and the answer to that seems to be "yes."
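To make the structure of that argument concrete, here is a toy expected-value sketch. All the numbers are made up purely for illustration; nothing here is an estimate of any actual risk or cost.

```python
# Toy sketch of the "small deltas can still add up" argument: each
# contributor shifts the probability of a very valuable outcome only
# slightly, yet each contribution can still pass an ordinary
# cost-benefit test, and the deltas sum to a non-trivial shift.
# All numbers below are hypothetical.

value_of_success = 10**12       # hypothetical value if the collective effort succeeds
contributors = 10**4            # hypothetical number of contributions
delta_per_contributor = 1e-7    # tiny probability shift from one contribution
cost_per_contributor = 10**4    # hypothetical cost of one contribution

ev_per_contributor = delta_per_contributor * value_of_success
total_delta = contributors * delta_per_contributor

# Each tiny delta is worth far more than it costs, and together the
# contributions shift the overall odds by a non-negligible amount.
assert ev_per_contributor > cost_per_contributor
print(f"EV per contribution: {ev_per_contributor:.0f}")
print(f"Total probability shift: {total_delta:.4f}")
```

The point is only that "each individual delta is tiny" does not by itself defeat a cost-benefit case, so long as the deltas are real and the value at stake is large.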
Is your issue more along the following lines?

1. Humans are bad at estimating very small percentages accurately, and can be orders of magnitude off (and the same goes for astronomical values in the long-term future).
2. Arguments for the cost-effectiveness of x-risk reduction rely on estimating very small percentages (and the same goes for astronomical values in the long-term future).
3. (Conclusion) Therefore, arguments for the cost-effectiveness of x-risk reduction cannot be trusted.
If so, I would reject 2, because I believe we shouldn't try to quantify things at those levels of precision. This does get us to your question "How does XR weigh costs and benefits?", which I think is a good question that I don't have a great answer to. It would be something along the lines of: "there's a grey area where I don't know how to make those tradeoffs, but most things do not fall into that grey area, so I'm not worrying too much about this. If I wouldn't fund something that supposedly reduces x-risk, it's either because I think it might increase x-risk, or because I think there are better options available for me to fund." Do you believe that many more choices fall into that grey area?