>I think the sequences are okay intro points for a number of topics, but they should not be treated as the foundation of ones belief system
I’d say the exact opposite—they’re a great foundation that, for the most part, helps you form a coherent worldview rather than collecting bits and pieces from everywhere without necessarily connecting them, and you can then go explore further in many directions for a more in-depth (and sometimes more modern) perspective.
To be honest, I don’t really see the appeal of the “lesswrong worldview”. It just seems to be the scientific worldview with a bunch of extra ideas of varying and often dubious quality added on. It all comes from one guy with a fairly poor track record of correctness. It seems like a fun social/hobby group more than anything else.
I don’t want to be overly negative because I know LW played a big part in bringing EA up and did originate some of the ideas here. Unfortunately, I think that social dynamic has also probably led to the LW ideas being overrated in the EA community.
The post you linked to literally admits to cherry-picking negative examples only (see quote below); it should not be cited as evidence for a ‘fairly poor track record’.
>I didn’t want to spend the time doing a thorough accounting exercise, though, so I decided to drop any claim that the examples were representative and just describe them as “cherry-picked” — and add in lots of caveats emphasising that they’re cherry-picked.
It’s pretty ridiculous to expect someone to go through a complete accounting exercise of every statement someone has ever made before expressing an opinion like that, and I’m guessing it’s not a standard you hold for criticism of anyone else. The cited articles provided plenty of examples of Yudkowsky being extremely wrong and refusing to acknowledge his mistakes, which matches my experience of his writing after years of familiarity with it.
My main point is that I have no reason to hold the opinions of Yudkowsky in higher esteem than those of any other successful pop-science writer like Neil deGrasse Tyson or Richard Dawkins or whoever. I find it concerning and a little baffling how much influence this one guy has over EA.
If your headline claim is that someone has a “fairly poor track record of correctness”, then I think “using a representative set of examples” to make your case is the bare minimum necessary for that to be taken seriously, not an isolated demand for rigor.
A lot of the people who built effective altruism see it as an extension of the LessWrong worldview, and think that that’s the reason why EA is useful to people where so many well-meaning projects are not.
Some random LessWrong things which I think are important (chosen because they come to mind, not because they’re the most important things):
The many people in EA who have read and understood Death Spirals (especially Affective Death Spirals and Evaporative Cooling of Group Beliefs) make EA feel safe, like a community I can trust (instead of feeling like a tiger I could choose to run from or ride, the way most large groups of humans feel to people like me). The many (and counting) people in EA who haven’t read Death Spirals make me nervous—we have something special here, and most large groups are not safe.
The many people in EA who aim to explain rather than persuade, and who are clear about their epistemic status, make me feel like I can frictionlessly trust their work as much as they do, without being fast-talked into something the author is themself uncertain about (but didn’t admit their uncertainty over because that’s not considered good writing). (The post by Ben Garfinkel linked above, the one that admitted up front that it was trying to argue a position and was happy to distort and elide to that end, and which was upvoted to +261, contributed to a growing sense of unease. We have something special here, and I’d like to keep it.)
Thought experiments like true objections and least convenient possible worlds swimming around the local noosphere have made conversations about emotionally charged topics much more productive than they are in most corners of the world or internet.
...I was going to say something about noticing confusion, and realized it was already in Quadratic Reciprocity’s post that we’re in the replies to. I think the original post pretty well refutes the idea that the LessWrong mindset is just the default scientific mindset with relatively minor things of dubious usefulness taped on? So, if the original post wasn’t useful for that purpose, I’ll let you decide whether to respond to this before I write more in the same vein.
I’ve read a decent chunk of the sequences, and there are plenty of things to like about them, like the norms of friendliness and openness to new ideas you mention.
But I cannot say that I subscribe to the LessWrong worldview, because there are too many things I dislike that come along for the ride. Chiefly, it seems to foster a sense of extreme overconfidence in beliefs about fields where people lack domain-specific knowledge. As a physicist, I find the writings about science to be shallow, overconfident and often straight-up wrong, and this has been the reaction I have seen from most experts when LessWrong touches on their field. (I will save the extensive sourcing for these beliefs for a future post.)
I think that EA as a movement has the potential to take the good parts of the LessWrong worldview while abandoning the harmful parts. Unfortunately, I believe too much of the latter still resides within the movement.
I disagree pretty strongly with the headline claim about extreme overconfidence, having found rationalist stuff singularly useful for reducing overconfidence, with its major emphases on falsifiable predictions, calibration, bowing quickly to the weight of the evidence, thinking through failure states in detail and planning for being wrong.
I could defend this at length, but it’s hard to find the heart to dig up a million links and write a long explanation when it seems unlikely that this is actually important to you or the people who strong-agreed with you.
Perhaps it has worked for you in reducing overconfidence, but it certainly hasn’t worked for Yudkowsky. I already linked you the list of failed prognostications, and he shows no sign of stopping, with the declaration that AI extinction has probability ~1.
I have my concerns about calibration exercises in general. I think they let you get good at estimating short-term, predictable events and toy examples, which then gives you overconfidence in your beliefs about long-term, unpredictable events.
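To make the worry concrete, here is a minimal sketch of the usual calibration scoring (a Brier score over resolved forecasts); the forecast numbers are invented purely for illustration, not taken from anyone’s actual track record:

```python
# Rough sketch of what calibration scoring actually measures. The Brier score
# is a standard scoring rule; the forecasts below are hypothetical.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and binary outcomes (0 or 1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical resolved forecasts: (probability assigned, what actually happened)
resolved = [(0.9, 1), (0.7, 1), (0.6, 0), (0.2, 0)]
print(brier_score(resolved))  # 0.125 here; always guessing 0.5 would score 0.25

# Long-horizon claims (e.g. "extinction with probability ~1") never resolve in
# time to enter this loop, so a good score here says little about them.
```

The point being that only short-horizon, well-defined questions ever get scored, so a good calibration record doesn’t obviously transfer to the long-term claims.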
I don’t expect you to dig up a million links when I’m not doing the same. I think it’s important to express these opinions out loud, lest we fall into a false impression of consensus on some of these matters. It is important to me… I simply don’t agree with you.
I don’t think my original post was good at conveying the important bits—in particular, I think I published it too quickly and missed out on elaborating on some parts that were more time-consuming to explain. I like your comment and would enjoy reading more.