I guess I make comments like the one I made above because I think too few people doing EA community building are seriously considering that the actual impact (and expected impact) of the EA movement could be net negative. It might not be, and I’m leaning towards it being positive, but I think it is a serious possibility that the EA movement causes more harm than good overall, for example via having sped up AI timelines through DeepMind/OpenAI/Anthropic, or via a few EA community members committing one of the biggest frauds ever. Or vaguer things: EAs fuck up cause prioritisation, maximise really hard, and can’t course-correct later.
The way the EA movement could end up not being net harmful is if we are ambitious but also prioritise being correct and having good epistemics really hard. That is not the vibe I get when I talk to many community builders. A lot of them seem happy with “making more EAs is good” and forget that the mechanism by which EA is positively impactful relies pretty heavily on our ability to steer correctly. I think they’ve decided too quickly that “the EA movement is good, therefore I must protect and grow it”. I think EA ideas are really good; I’m less sure about the movement.
If EA is net harmful then people shouldn’t work directly on solving problems either; we should just pack up and go home.
I like EA ideas; I think my sanely trying to solve the biggest problems is a good thing. I am less sure about the current EA movement, partly because of the track record of the movement so far, and partly because of an intuition that movements this intent on gaining influence and recruiting more people tend to go off track. It doesn’t look to me like enough is being done to preserve people’s sanity and get them to think clearly in the face of the mind-warping effects of the movement.
I think it could both be true that we need a healthy EA (or longtermist) movement to make it through this century and that the current EA movement ends up causing more harm than good. Just to be clear, I currently think that on its current trajectory the EA movement will end up being net good, but I am not super confident in this.
Also, sorry, my answer mostly comes from thinking about AI x-risk rather than EA as a whole.
EA didn’t cause the FTX fraud.
Huh, not sure what you mean. Sure seems like the FTX fraud was committed by prominent EAs, in the name of EA principles, using the resources of the EA movement. In as much as EA has caused anything, I feel like it has caused the FTX fraud.
Like, by the same logic you could be like “EA didn’t cause millions of dollars to be allocated to malaria nets”. And like, yeah, there is something fair about that, in the sense that it was ultimately individual people or philanthropists who gave money to EA causes, but at the end of the day, if you get to take some credit for Dustin’s giving, you also have to take some blame for Sam’s fraud.
I really, really, realllllly disagree. Saying that EA caused FTX is more like saying EA caused Facebook than the reverse. You should have a pretty firm prior that someone who becomes a billionaire does it primarily because they enjoy the immense status and prestige that being “the world’s richest U30” bestows on a person; likewise for someone committing fraud to keep that status.
My primary character assessment at this point is that he was an EA who was also one of those flavors of people who become quasi-sociopaths when they become rich and powerful. Nothing in Sam’s actual, concrete actions seems to indicate differently, and indeed he actually spent the greater part of that money on consumption goods like mansions for himself and his co-conspirators. Maybe he really was in it for the good, at the beginning, but I just can’t believe that someone making a late decision to start a Ponzi scheme “for the greater good” would act like he did.
(Also, using the resources of the EA movement how, exactly? Seems to me like his fraud would have been just as effective had he not identified as an EA. He received investment and consumer funds because of the firm’s growth rate and Alameda’s generous trades, respectively, not because people were interested in contributing to his charities.)
I don’t really understand the distinction here. If a core member of the EA community had founded Facebook, recruiting for its leadership primarily from members of EA, and was acting throughout as a pretty prominent member of the EA community, I would also say that “EA had a substantial responsibility in causing Facebook”. But actual Facebook was founded before EA was even a thing, so this seems totally non-comparable to me.
And while I don’t really buy your character-assessment, I don’t really see what this has to do with the blame analysis. If EA has some prominent members who are sociopaths, we should take responsibility for that in the same way as we would take credit for some prominent members who are saints.
Separately, this part seems confidently wrong:
greater part of that money on consumption goods like mansions for himself and his co-conspirators
I am quite confident Sam spent <$100MM on consumption, and the FTX Future Fund has given away on the order of $400MM in grants, so this statement is off by around a factor of 2, and more likely by a full order of magnitude, though that depends a bunch on how you count political contributions and other stuff that’s kind of ambiguously charity vs. helpful to Sam.
the FTX Future Fund has given away more than $400MM in grants
Do you have links/evidence here? I remember counting less than $250M when I looked at their old website, not even accounting for some of the promised grants that presumably never got paid out.
I remember this number came up in conversations at some point, so I don’t have a source. Plausibly the number is lower by a factor of 2 (I actually was planning to change that line to reflect my uncertainty better, and edited it to “on the order of $400MM in grants”, since $600MM wouldn’t have surprised me, and neither would $200MM).
I don’t really understand the distinction here. If a core member of the EA community had founded Facebook...[snip]...and was acting throughout as a pretty prominent member of the EA community, I would also say that “EA had a substantial responsibility in causing Facebook”
I likewise don’t understand what you’re finding weird about my position? If Eliezer Yudkowsky robbed a bank, that wouldn’t make LessWrong “responsible for a bank robbery”, even if Eliezer Yudkowsky were in the habit of donating a proportion of his money to AI alignment organizations. Looking at alternate-universe EY (AU-EY) grabbing the money out of the brown paper bag and throwing it at strippers, you would conclude he mostly did it for his own reasons, just like you would say of a robber who happened to be a congressman.
If we could look into AU-EY’s mind and see that he thought he was doing it “in the name of EA”, and indeed donated the robbed funds to charity, then, sure, I’d freely grant that EA is at least highly complicit—but my point is that I don’t believe that was SBF’s main motivation for founding FTX, and I think that absent EA he had a similar chance of running such frauds from the outset. You can say that SBF’s being a conditional sociopath is immaterial to his reducing “the group of people with the EA sticker’s point total”, but it’s relevant for answering the more productive question of whether EA made him more or less likely to commit massive fraud.
[unsnip]...recruiting for its leadership primarily from members of EA...[/unsnip]
Well, I guess recruiting from EA leadership is one thing, but to what extent did FTX actually benefit from an EA-affiliated talent pool? I reviewed most of the executive team during my Manifold betting and didn’t actually come across anybody who I could find had a history of EA affiliation besides SBF (though you may know more than me).
I am quite confident Sam spent <$100MM on consumption, and the FTX Future Fund has given away more than $400MM in grants, so this statement is off by a factor of 4, and more likely by a full order of magnitude.
I actually didn’t know that. Is this counting the Anthropic investment or did FTXFF really give-away give-away that much money?
The Anthropic investment alone is $500MM, this is in addition to that.
Ok, that gives me some pause about his motivations… Probably enough to change my opinion entirely, but, still.
I reviewed most of the executive team during my Manifold betting and didn’t actually come across anybody who I could find had a history of EA affiliation besides SBF (though you may know more than me).
That… seems really confused to me. Caroline was part of EA Stanford, almost all of the early Alameda staff were heavily-involved EAs (including past CEA CEO Tara MacAulay). I know less about Nishad, but he was definitely very heavily motivated by EA philosophy while he was working at FTX, had read a lot of the LessWrong content, etc.
There is also this explicit quote by Nishad from before the collapse:
According to FTX’s director of engineering Nishad Singh, Alameda “couldn’t have taken off without EA,” because “all the employees, all the funding—everything was EA to start with.”
It seems really quite beyond a doubt to me that FTX wouldn’t have really existed without the EA community existing. Even the early funding for Alameda was downstream of a bunch of EA funders.
I likewise don’t understand what you’re finding weird about my position? If Eliezer Yudkowsky robbed a bank, that wouldn’t make LessWrong “responsible for a bank robbery”
I mean, if Eliezer robbed a bank, I think I would definitely think the rationality community is responsible for a bank robbery (not “LessWrong”, which is a website). That seems like the only consistent position by which the rationality community can be responsible for anything, including good things. If the rationality community is not responsible for Eliezer robbing a bank, then it definitely can’t be responsible for any substantial fraction of AI Alignment research either, which is usually more indirectly downstream of the core people in the community.
It seems really quite beyond a doubt to me that FTX wouldn’t have really existed without the EA community existing. Even the early funding for Alameda was downstream of a bunch of EA funders.
Yeah, I guess I’m just wrong then. I’m confused as to why I didn’t remember reading the bit about Caroline in particular—it’s literally on her Wikipedia page that she was an EA at Stanford.
I mean, if Eliezer robbed a bank, I think I would definitely think the rationality community is responsible for a bank robbery (not “LessWrong”, which is a website). That seems like the only consistent position by which the rationality community can be responsible for anything, including good things. If the rationality community is not responsible for Eliezer robbing a bank, then it definitely can’t be responsible for any substantial fraction of AI Alignment research either, which is usually more indirectly downstream of the core people in the community.
FWIW I still don’t understand this perspective, at all. It seems bizarre. The word “responsible” implies some sort of causal relationship between the ideology and the action; i.e., Eliezer + exposure to/existence of the rationalist community --> robbed bank. Obviously AI alignment research is downstream of rationalism, because you can make an argument, at least, that some AI alignment research wouldn’t have happened if those researchers hadn’t been introduced to the field by LessWrong et al. But just because Eliezer does something doesn’t mean rationalism is responsible for it, any more than calculus or the scientific method was “responsible” for Isaac Newton’s neuroticisms.
It sounds like the problem is that you’re using the term “Rationality Community” to mean “all of the humans who make up the rationality community”, and I’m using it to refer to the social network. But I prefer my definition, because I’d rather discuss the social network and the ideology than the group of people: the people would exist regardless, and what we really want to talk about is whether or not the social network is +EV.
The word “responsible” implies some sort of causal relationship between the ideology and the action
No, it implies a causal relationship between the community and the action. I don’t see any reason to constrain blame to “being caused by the ideology of the community”. If members of the community cause it, and the existence of the community had a pretty direct effect, then it sure seems like you should hold the community responsible.
In your last paragraph, you sure are also conflating “ideology” and “social network”. It seems really clear that the social network of EA played a huge role in FTX’s existence, so it seems like you would agree that the community should play some role, but then for some reason you are then additionally constraining things to the effect of some ill-specified ideology. Like, can a community of people with no shared ideology literally not be blamed for anything?
It seems really clear that the social network of EA played a huge role in FTX’s existence, so it seems like you would agree that the community should play some role, but then for some reason you are then additionally constraining things to the effect of some ill-specified ideology
No, I agree with you now that at the very least EA is highly complicit if not genuinely entirely responsible for causing FTX.
I don’t think we actually disagree on anything at this point. I’m just pointing out that, if the community completely disbanded and LessWrong shut down and rationalists stopped talking to each other and trained themselves not to think about things in rationalist terms, and after all that AU-Yudkowsky still decided to rob a bank, then there’s a meaningful sense in which the Inevitable Robbery was never “the rationality community’s” fault even though AU-Yudkowsky is a quintessential member. At least, it implies a different sort of calculus WRT considering the alternative world without the rationality community.