How much does work in AI safety help the world? Probability distribution version (Oxford Prioritisation Project)

By Tom Sittler

2017-04-26

We’re centralising all discussion on the Effective Altruism Forum. To discuss this post, please comment there.

The Global Priorities Project (GPP) has a model quantifying the impact of adding a researcher to the field of AI safety. Quoting from GPP:

There’s been some discussion lately about whether we can make estimates of how likely efforts to mitigate existential risk from AI are to succeed, and about what reasonable estimates of that probability might be. In a recent conversation between the two of us, Daniel mentioned that he didn’t have a good way to estimate the probability that joining the AI safety research community would actually avert existential catastrophe. Though it would be hard to be certain about this probability, it would be nice to have a principled back-of-the-envelope method for approximating it. Owen actually has a rough method based on the one he used in his article Allocating risk mitigation across time, but he never spelled it out.

I found this model (moderately) useful and turned it into a Guesstimate model, which you can view here. You can write to me privately and I’ll share my inputs with you (so as not to anchor people).
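For readers who prefer code to a Guesstimate sheet, here is a minimal Monte Carlo sketch of the general shape of such a back-of-the-envelope calculation. The three-factor decomposition and every parameter range below are arbitrary placeholders chosen only for illustration; they are not the structure or the inputs of the GPP model, and they are not my Guesstimate inputs.

```python
import numpy as np

# Illustrative Monte Carlo sketch of a back-of-the-envelope estimate of the
# probability that one additional researcher averts existential catastrophe.
# All ranges below are placeholders, NOT the GPP or Guesstimate inputs.

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Probability that unsafe AI would cause an existential catastrophe.
p_ai_risk = rng.uniform(0.01, 0.20, N)

# Probability that the AI safety community as a whole averts the
# catastrophe, conditional on the risk being real.
p_community_success = rng.uniform(0.05, 0.50, N)

# Fractional contribution of one extra researcher to the community's
# chance of success, crudely modelled as 1 / (community size).
community_size = rng.uniform(50, 500, N)
marginal_share = 1.0 / community_size

# Probability that adding one researcher averts existential catastrophe.
p_avert = p_ai_risk * p_community_success * marginal_share

print(f"median: {np.median(p_avert):.2e}")
print(f"90% interval: {np.percentile(p_avert, 5):.2e} "
      f"to {np.percentile(p_avert, 95):.2e}")
```

The 1 / community_size term is a crude stand-in for diminishing returns; a fuller treatment (as in Owen's article) would model how the community's success probability changes with its size.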

Have other people found this model useful? Why, or why not? What would be your inputs into the model?