Thanks! The part of the post that was meant to be most responsive to this question about the size of AI x-risk was this:
For “Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI,” I am pretty sympathetic to the analysis of Joe Carlsmith here. I think Joe’s estimates of the relevant probabilities are pretty reasonable (though the bottom line is perhaps somewhat low), and if someone convinced me that the probabilities on the premises in his argument should be much higher or lower, I’d probably update. There are a number of reviews of Joe Carlsmith’s work that were helpful to varying degrees but would not have won large prizes in this competition.
I think explanations of how Joe’s probabilities should be different would help. Alternatively, an explanation of why some other set of propositions is relevant (with probabilities attached and mapped to a conclusion, roughly in the shape sketched below) could also help.
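For concreteness, here is a minimal sketch of what I mean by “probabilities attached and mapped to a conclusion”: a conclusion decomposed into premises whose conditional probabilities chain together into a bottom-line number. The premises and any numbers plugged in are the entrant’s own; this is not meant to reproduce Joe’s actual premises or estimates.

```latex
% Illustrative structure only (not Joe Carlsmith's actual premises or numbers):
% decompose the conclusion C into premises A_1, ..., A_n and chain conditionals.
P(C) = P(A_1) \cdot P(A_2 \mid A_1) \cdots P(A_n \mid A_1, \dots, A_{n-1})
```

An entry could then argue that one of the conditional terms should be much higher or lower, or propose a different decomposition entirely and show how its probabilities map to a conclusion.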
I think it’s kinda weird and unproductive to focus a very large prize on things that would change a single person’s views, rather than on things that would be robustly persuasive to many people.
E.g., does this imply that you personally control all of the FF’s funding? (I assume you don’t, but then it would make sense to try to convince all FF managers, trustees, etc.)