Some of the primary projects getting resources from this ecosystem do not seem built using the principles and values (e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, Will MacAskill’s career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR). Insofar as these are the primary projects who will reap the benefits of the resources that Lightcone invests into this ecosystem, I would like to change course.
I think the way you quoted it is a bit misleading? I think what is actually being said is that one of the projects “getting resources” from Lightcone’s work, i.e. from what the Lightcone Offices have been used for, is Will MacAskill’s career as a public intellectual. I think this is linked in with a lot of the harms of rationalists being displaced by EAs. The knock on MacAskill is not one of active harm, but that his career reaps benefits which do not align with OP’s values. I also do not think Will’s AI risk models look like those of FHI/MIRI/CFAR people, given how low the p(doom) he gives in WWOTF is.
‘I think this is linked in with a lot of the harms of rationalists being displaced by EAs.’
Yeah, this is probably some sort of a crux. Forget Will as an individual for a second; my own impression of things is that:
A) EAs as a group have achieved some pretty impressive things, and I expect them/us to continue doing so, for example, on biorisk (whether or not the EA brand survives the current reputational crisis).
B) The rationalists actually have very little in the way of legible achievements as a group, insofar as they are distinct from EAs. (I should note, however, that I have been very intellectually impressed by the individual rationalists I have interacted with professionally; I’m sure many individual rationalists are smarter and more capable than me!) The main exception is that some very technically impressive people in current AI research were partly inspired by Yudkowsky to get into AI, which this post itself thinks is probably extremely net bad.
So firstly, I am personally not very keen on the idea that MIRI or CFAR are big contributors to anything good, since I haven’t seen evidence that persuades me otherwise. And secondly, it’s not clear to me that, by their own lights, the authors should see MIRI or CFAR as major contributors to anything good, since they effectively think those orgs have been bad for AI X-risk. (They might not quite put it like that, but insofar as you think people being worried about AI X-risk has just sped up progress, it’s hard not to see MIRI/CFAR/the Bay rationalist scene as a whole as having a large share of the responsibility.) Given the combination of those two things, I am not very happy with the authors portraying rationalism as the ‘good’ thing threatened by bad EA, insofar as that’s a fair reading. (Though I think ‘at least we didn’t do FTX’ is a fair response.)
I’d also say that opinions vary on how “epistemically healthy” CFAR actually is: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe/comment/dLyEcki7dBdxFkvJd
I don’t know who is right here, but having apparent ex-employees say this kind of stuff is not a good sign, community-epistemics-wise. Nor is being a community in which people regularly either form cults or are wrongly accused of forming cults, as seems to have happened at least three times: https://www.lesswrong.com/posts/ygAJyoBK7MhEvnwBc/some-thoughts-on-the-cults-lw-had
https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research
Note: I am absolutely not accusing Ben and Oliver of personally having “bad epistemics”.
LessWrong has had a few cults emerge from the ecosystem, but at least some of the hate for e.g. Leverage is basically just because Leverage holds up a mirror to mainstream EA/rationalism, and mainstream EA just really hates the reflection. “Yes, we are a cult, and what do you think you guys are?”
(Incidentally, no one ever talks about the companies/institutions that came out of Leverage, but surely they should be factored into our calculations when we think about the costs and benefits!)
I don’t think that’s really inconsistent with anything I said. And I think that I am arguing here in relative favor of the less cult-y bits of EA. I’ve also never heard anything like the Leverage testimony about any non-rationalist EA org, though obviously that’s not proof it isn’t happening.
I mean, what about Alameda and FTX? Also early CEA. Also, what about Nonlinear? Of course they’re not exactly the same as Leverage, but neither is any other rationalist-adjacent org.
FTX and Alameda sound extremely bad to me (obviously worse in effect than Leverage!), but in a way that is not particularly “cult”, although I get that’s a bit vague (and stories of SBF threatening people are closer to that, as opposed to the massive fraud). As for the other stuff, I haven’t heard the relevant stories, but you may be right; I am not particularly clued into this, and it’s possible it’s just coincidence that I have heard about crazy founder worship, sleep deprivation, vague stuff about cleansing yourself of “metaphorically” demonic forces, etc. at Leverage but not at those other places. I recall bullying accusations against someone high up at Nonlinear, but not the details. Probably I shouldn’t have made the relative comparison between rationalists and non-rationalists, because I haven’t really been following who all the orgs are and what they’ve been doing. Though on the other hand, I feel like the rationalists have had enough cult-y incidents that the default is probably that other orgs are less like that. But maybe I should have just stuck to ‘there are conflicting reports on whether epistemics are actually all that good in the Bay scene, and some reasonable evidence against.’