I think I largely agree, except that I'm on the fence about the last paragraph.
"Regarding existential risk estimates, I do see value in doing research on specific questions that would make us adjust those estimates, and then adjusting them accordingly."
I agree with what you say in this paragraph. But it seems somewhat separate to the question of how valuable it is to elicit and collate current views?
I think my views are roughly as follows:
"Most relevant experts are fairly confident that certain existential risks (e.g., from AI) are substantially more likely than others (e.g., from asteroids or gamma ray bursts). The vast majority of people (and a substantial portion of EAs, longtermists, policymakers, etc.) probably aren't aware that experts think that, and might guess that the difference in risk levels is less substantial, or be unable to guess which risks are most likely. (This seems analogous to the situation with large differences in charity cost-effectiveness.) Therefore, eliciting and collecting experts' views can provide a useful input into other people's prioritisation decisions.
That said, on the margin, it'll be very hard to shift the relevant experts' credences on x-risk levels by more than, for example, a factor of two. And there are often already larger differences in other factors in our decisions, e.g., tractability of or personal fit for interventions. In addition, we don't know how much weight to put on experts' specific credences anyway. So there's not that much value in trying to further inform the relevant experts' credences on x-risk levels. (Though the same work that would do that might be very valuable for other reasons, like helping those experts build more detailed models of how risks would occur and what the levers for intervention are.)"
Does that roughly match your views?
"If I thought there was a research project that would cause most people to revise that estimate to, say, 0.1%, I do think this would be super valuable."
Just to check, I assume you mean that there'd be a lot of value in a research project that would cause most people to revise that estimate to (say) 0.1%, if indeed the best estimate is (say) 0.1%, and that wouldn't cause such a revision otherwise?
One alternative thing you might mean: "I think the best estimate is 0.1%, and I think a research project that would cause most people to realise that would be super valuable." But I'm guessing that's not what you mean?
Yes, that sounds roughly right. I hadn't thought about the value for communicating with broader audiences.
"Just to check, I assume you mean that there'd be a lot of value in a research project that would cause most people to revise that estimate to (say) 0.1%, if indeed the best estimate is (say) 0.1%, and that wouldn't cause such a revision otherwise?"
Yes, that's what I meant.
(I think my own estimate is somewhere between 0.1% and 10%, FWIW, but it also feels quite unstable, and I don't trust that number much.)
Thanks, that's all really interesting.