Perhaps it has worked for you in reducing overconfidence, but it certainly hasn’t worked for Yudkowsky. I already linked you the list of his failed prognostications, and he shows no sign of stopping, now declaring that AI-driven extinction has probability ~1.
I have my concerns about calibration in general. I think calibration exercises make you good at estimating short-term, predictable events and toy examples, which then gives you overconfidence in your beliefs about long-term, unpredictable events.
I don’t expect you to dig up a million links when I’m not doing the same. I think it’s important to express these opinions out loud, lest we fall into a false impression of consensus on some of these matters. It is important to me… I simply don’t agree with you.