‘…e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, Will MacAskill’s career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR)…’
Warning, this is coming from quite a tribal place, since I was an Oxford philosopher back when GWWC was first getting started, so consider me biased but:
Obviously FTX was very bad, and the only provably very harmful thing that the community has done so far, but I still want to push back a bit here. CEA and Will have been heavily involved with the bits of EA that seem to me to have obviously worked fairly well: global development stuff and farm animal welfare campaigning. Many lives have been saved by donations to AMF. Meanwhile, by your own lights, you think it is more likely than not that the most important effect of the Bay Area Rationalist cluster and the FHI has been to speed up AI capabilities research that you yourselves think of as a near-term extinction risk. It seems like, by your own lights, Will’s career as a public intellectual (as opposed to his and CEA’s involvement in setting up Alameda) has been harmful to the exact extent that it has promoted ideas about working on AI risk that he got from FHI/MIRI/CFAR people, whilst it has been good otherwise (i.e. when he has been promoting ideas that are closer to the very beginnings of Oxford/GiveWell EA, at least if you agree that global development/animal welfare EA are good in themselves).
‘Some of the primary projects getting resources from this ecosystem do not seem built using the principles and values (e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, Will MacAskill’s career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR). Insofar as these are the primary projects who will reap the benefits of the resources that Lightcone invests into this ecosystem, I would like to change course.’

I think the way you quoted it is a bit misleading? I think what is actually being said is that the projects the Lightcone Offices has been used for, and the projects “getting resources” from Lightcone’s work, include Will MacAskill’s career as a public intellectual. I think this is linked in with a lot of the harms of rationalists being displaced by EAs. I think the knock on MacAskill is not one of active harm, but that his career reaps benefits which do not align with the OP’s values. I also do not think Will’s AI risk models look like FHI/MIRI/CFAR people’s, given how low his p(doom) in WWOTF is.
‘I think this is linked in with a lot of the harms of rationalists being displaced by EAs.’
Yeah, this is probably some sort of a crux. Forget Will as an individual for a second; my own impression of things is that:
A) EAs as a group have achieved some pretty impressive things, and I expect them/us to continue doing so, for example, on biorisk (whether or not the EA brand survives the current reputational crisis).
B) The rationalists actually have very little in the way of legible achievements as a group, insofar as they are distinct from EAs. (I should note that I have, however, been very intellectually impressed by the individual rationalists I have interacted with professionally; I’m sure many individual rationalists are smarter and more capable than me!) The main exception is that some very technically impressive people in current AI research have been partly inspired by Yudkowsky to get into AI, which this post itself thinks is probably extremely net bad.
So firstly, I am personally not very keen on the idea that MIRI or CFAR are big contributors to anything good, since I haven’t seen evidence that’s persuaded me otherwise. And secondly, it’s not clear to me that by their own lights the authors should see MIRI or CFAR as major contributors to anything good, since they effectively think that they have been bad for AI X-risk. (They might not quite put it like that, but insofar as you think people being worried about AI X-risk has just sped up progress, it’s hard not to see MIRI/CFAR/the Bay rationalist scene as a whole as having a large share of the responsibility.) Given the combination of those two things, I am not very happy with the authors portraying rationalism as the ‘good’ thing threatened by bad EA, insofar as that’s a fair reading. (Though I think ‘at least we didn’t do FTX’ is a fair response.)
I’d also say that opinions vary on how “epistemically healthy” CFAR actually is: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe/comment/dLyEcki7dBdxFkvJd
I don’t know who is right here, but having (apparent) ex-employees say this kind of stuff is not a good sign, community epistemics-wise. Nor is being a community in which people regularly either form cults or are wrongly accused of forming cults, as seems to have happened at least three times: https://www.lesswrong.com/posts/ygAJyoBK7MhEvnwBc/some-thoughts-on-the-cults-lw-had
https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research
Note: I am absolutely not accusing Ben and Oliver of personally having “bad epistemics”.
LessWrong has had a few cults emerge from the ecosystem, but at least some of the hate for e.g. Leverage is basically just because Leverage holds up a mirror to mainstream EA/rationalism and mainstream EA just really hates the reflection. “Yes, we are a cult, and what do you think you guys are?”
(incidentally, no one ever talks about the companies/institutions that came out of Leverage, but surely this should be factored into our calculations when we think about the costs & benefits!)
I don’t think that’s really inconsistent with anything I said. And I think that I am arguing here in relative favor of the less cult-y bits of EA. I’ve also never heard anything like the Leverage testimony about any non-rationalist EA org, though obviously that’s not proof it isn’t happening.
I mean, what about Alameda and FTX? Also early CEA. Also, what about Nonlinear? Of course they are not exactly the same as Leverage, but neither is any other rationalist-adjacent org.
FTX and Alameda sound extremely bad (obviously worse in effect than Leverage!) to me in a way that is not particularly “cult”, although I get that’s a bit vague (and stories of SBF threatening people are closer to that, as opposed to the massive fraud). As for the other stuff, I haven’t heard the relevant stories, but you may be right; I am not particularly clued into this stuff, and it’s possible it’s just coincidence that I have heard about crazy founder worship, sleep deprivation, vague stuff about cleansing yourself of “metaphorically” demonic forces, etc., at Leverage but not at those other places. I recall bullying accusations against someone high up at Nonlinear, but not the details. Probably I shouldn’t have made the relative comparison between rationalists and non-rationalists, because I haven’t really been following who all the orgs are and what they’ve been doing. Though on the other hand, I feel like the rationalists have hit a high enough level of cult-y incidents that the default is probably that other orgs are less like that. But maybe I should have just stuck to ‘there are conflicting reports on whether epistemics are actually all that good in the Bay scene, and some reasonable evidence against.’
Hi David,
This excludes impact on animals (which I think might be the major driver in the near term), and also longterm impacts. I used to consider the overall impact of GiveWell’s top charities robustly positive, but no longer do. I agree that, mathematically, E(“overall effect”) > 0 if:
“Overall effect” = “nearterm effect on humans” + “nearterm effect on animals” + “longterm effect”.
E(“nearterm effect on humans”) > 0.
E(“nearterm effect on animals” + “longterm effect”) = k E(“nearterm effect on humans”).
k = 0.
However, setting k to 0 seems pretty arbitrary. One could just as well set it to −1, in which case E(“overall effect”) = 0. Since I am not confident |k| << 1, I am not confident either about the sign of E(“overall effect”).
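To spell out the arithmetic (this is only a restatement of the conditions above, abbreviating E(“nearterm effect on humans”) as E(humans) and similarly for the other terms):

$$E(\text{overall effect}) = E(\text{humans}) + E(\text{animals}) + E(\text{longterm}) = (1 + k)\,E(\text{humans})$$

So, given E(humans) > 0, E(“overall effect”) is positive exactly when k > −1: k = 0 makes it positive, k = −1 makes it zero, and k < −1 flips the sign, which is why the conclusion hinges on how confident one is that |k| is small.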