I don’t have anything great, but the best thing I could come up with was definitely “I feel most stuck because I don’t know what your cruxes are”.
I started writing a case for why I think AI X-Risk is high, but I really didn’t know whether the things I was writing would hit your biggest uncertainties. My sense is you’ve probably read most of the same arguments that I have, so our difference in final opinion is probably generated by some other belief you hold that I don’t, and I don’t really know how to address that preemptively.
I might give it a try anyways, and this doesn’t feel like a defeater, but in this space it’s the biggest thing that came to mind.
Thanks! The part of the post that was supposed to be most responsive to this, on the size of AI x-risk, was this:
For “Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI.” I am pretty sympathetic to the analysis of Joe Carlsmith here. I think Joe’s estimates of the relevant probabilities are pretty reasonable (though the bottom line is perhaps somewhat low) and if someone convinced me that the probabilities on the premises in his argument should be much higher or lower I’d probably update. There are a number of reviews of Joe Carlsmith’s work that were helpful to varying degrees but would not have won large prizes in this competition.
I think explanations of how Joe’s probabilities should be different would help. Alternatively, an explanation of why some other set of propositions was relevant (with probabilities attached and mapped to a conclusion) could help.
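Concretely, the structure I have in mind is a conjunction of premises, each given a conditional probability and multiplied through to a bottom line. The numbers below are illustrative placeholders (not Joe’s actual estimates), just to show the form an alternative set of propositions-plus-probabilities could take:

```latex
% Conjunctive argument structure: each premise gets a probability
% conditional on the earlier premises, and the product is the bottom line.
% The factors here are illustrative placeholders, not estimates from the report.
\[
  P(\text{catastrophe})
    = \prod_{i=1}^{n} P\!\left(A_i \mid A_1, \dots, A_{i-1}\right)
    \approx 0.65 \times 0.4 \times 0.4 \times 0.55 \times 0.5
    \approx 0.03
\]
```

Moving any single factor up or down moves the bottom line roughly proportionally, which is why spelling out where the factors should differ (or why a different set of premises is the right decomposition) is more useful to me than arguing the conclusion directly.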
I think it’s kinda weird and unproductive to focus a very large prize on things that would change a single person’s views, rather than on things that would be robustly persuasive to many people.
E.g., does this imply that you personally control all funding of the FF? (I assume you don’t, but then it’d make sense to try to convince all FF managers, trustees, etc.)
FWIW, I would prefer a post on “what actually drives your probabilities” over one on “what reasons you think will be most convincing to others”.