I disagree pretty strongly with the headline claim about extreme overconfidence, having found rationalist stuff singularly useful for reducing overconfidence with its major emphases on falsifiable predictions, calibration, bowing quickly to the weight of the evidence, thinking through failure-states in detail, and planning for being wrong.
I could defend this at length, but it’s hard to find the heart to dig up a million links and write a long explanation when it seems unlikely that this is actually important to you or the people who strong-agreed with you.
Perhaps it has worked for you in reducing overconfidence, but it certainly hasn’t worked for Yudkowsky. I already linked you to the list of failed prognostications, and he shows no sign of stopping, with the declaration that AI extinction has probability ~1.
I have my concerns about calibration in general. I think calibration exercises let you get good at estimating short-term, predictable events and toy examples, which then gives you overconfidence in your beliefs about long-term, unpredictable events.
I don’t expect you to dig up a million links when I’m not doing the same. I think it’s important to express these opinions out loud, lest we fall into a false impression of consensus on some of these matters. It is important to me… I simply don’t agree with you.