I’ve read a decent chunk of the Sequences, and there are plenty of things to like about them, like the norms of friendliness and openness to new ideas you mention.
But I cannot say that I subscribe to the LessWrong worldview, because there are too many things I dislike that come along for the ride. Chiefly, it seems to foster a sense of extreme overconfidence in beliefs about fields where people lack domain-specific knowledge. As a physicist, I find the writings about science to be shallow, overconfident, and often straight-up wrong, and this has been the reaction I have seen from most experts when LessWrong touches on their field. (I will save the extensive sourcing for these claims for a future post.)
I think that EA as a movement has the potential to take the good parts of the LessWrong worldview while abandoning the harmful parts. Unfortunately, I believe too much of the latter still resides within the movement.
I disagree pretty strongly with the headline claim about extreme overconfidence, having found rationalist material singularly useful for reducing overconfidence, with its major emphases on falsifiable predictions, calibration, bowing quickly to the weight of the evidence, thinking through failure states in detail, and planning for being wrong.
I could defend this at length, but it’s hard to find the heart to dig up a million links and write a long explanation when it seems unlikely that this is actually important to you or the people who strong-agreed with you.
Perhaps it has worked for you in reducing overconfidence, but it certainly hasn’t worked for Yudkowsky. I already linked you to the list of failed prognostications, and he shows no sign of stopping, with his declaration that extinction from AI has probability ~1.
I have my concerns about calibration training in general. I think it lets you get good at estimating short-term, predictable events and toy examples, which then gives you overconfidence in your beliefs about long-term, unpredictable events.
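To be concrete about what I mean (a rough sketch with made-up numbers, not anyone’s actual forecasting record): calibration practice is typically scored on short-term, resolvable yes/no questions, e.g. with a Brier score and binned hit rates, something like this:

```python
# Rough illustrative sketch only -- the forecasts and outcomes are made-up placeholders.
from collections import defaultdict

def brier_score(probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def calibration_bins(probs, outcomes, n_bins=10):
    """Compare average stated probability with observed frequency inside each probability bin."""
    buckets = defaultdict(list)
    for p, o in zip(probs, outcomes):
        buckets[min(int(p * n_bins), n_bins - 1)].append((p, o))
    rows = []
    for b in sorted(buckets):
        pairs = buckets[b]
        avg_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        rows.append((avg_p, freq, len(pairs)))
    return rows

# Placeholder forecasts on short-term, resolvable questions.
probs    = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2, 0.95, 0.1]
outcomes = [1,   1,   0,   1,   0,   0,   1,    0]

print("Brier score:", round(brier_score(probs, outcomes), 3))
for avg_p, freq, n in calibration_bins(probs, outcomes):
    print(f"stated ~{avg_p:.2f} -> observed {freq:.2f} over {n} questions")
```

Nothing in a loop like that touches the long-horizon, one-off questions where I think the overconfidence actually creeps in.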
I don’t expect you to dig up a million links when I’m not doing the same. I think it’s important to express these opinions out loud, lest we fall into a false impression of consensus on some of these matters. It is important to me… I simply don’t agree with you.