What if there were a norm in EA of not accepting large amounts of funding unless a third-party auditor of some sort has done a thorough review of the funder's finances and found them to be above-board? Obviously there are lots of variables in this proposal, but I think something like this is plausibly good and would be interested to hear pushback.
I don't know much about how this all works, but how relevant do you think this point is?
If Sequoia Capital can get fooled (presumably after more due diligence and apparent access to the books than you could possibly have gotten while dealing with the charitable arm of FTX FF, which was itself almost certainly in the dark), then there is no reasonable way you could have known.
[Edit: I don't think the OP had included the Eliezer tweet in the question when I originally posted this. My point is basically already covered in the OP now.]
It's a relevant point, but I think we can reasonably expect EA leadership to do better at vetting megadonors than Sequoia, due to (a) more context on the situation, e.g. EAs should have known more about SBF's past than Sequoia did, and/or could have found it out more easily via social and professional connections, and (b) more incentive to avoid downside risks, e.g. the SBF blowup matters a lot more for EA's reputation than for Sequoia's.
To be clear, this does not apply to charities receiving money from FTXFF; that is a separate question from EA leadership.
You expect the people being given free cash to do a better job of due diligence than the people handing someone a giant cash pile?
Not to mention that the Future Fund donations probably did more good for EA causes than the reputational damage is going to do harm to them (making the further assumption that this is actually net reputational damage, as opposed to a bunch of free coverage that pushes some people off and attracts some other people).
Also, to be pithy:
If we are so f*****g clever as to know what risks everyone else misses and how to avoid them, how come we didn't spot that one of our best and brightest was actually a massive fraudster?
I haven't expected EAs to have any unusual skill at spotting risks.
EAs have been unusual at distinguishing risks based on their magnitude. The risks from FTX didn't look much like the risk of human extinction.
But half our resources to combat human extinction were at risk due to risks to FTX. Why didn't we take that more seriously?
And also the community's reputation, to a very significant degree. It was arguably the biggest mistake EA has made thus far (or the biggest one that has become obvious; I could imagine we're making other mistakes that aren't yet obvious).
Do you think EAs should stop having opinions on extinction risks because they made a mistake in a different relevant domain (insufficient cynicism in the social realm)? I don't see the logic here.
I think (a) and (b) are good points. Although there's also (c): it's reasonable to give extra trust points to a member of the community who's just given you a not-insignificant part of their wealth to spend on charitable endeavours as you see fit.
Note that I'm obviously not saying this implied SBF was super trustworthy on balance, just that it's a reasonable consideration pushing in the other direction when making the comparison with Sequoia, who lacked most of this context. (I do think it's a good thing that we give each other trust points for signalling and demonstrating commitments to EA.)
The thing is, whilst SBF pledged ~all his wealth to EA causes, he only actually gave ~1-2% before the shit hit the fan. It seems doubtful that any significant amounts beyond this were ever donated to a separate legal entity not controlled by SBF/FTX (e.g. the Future Fund or FTX Foundation). That should've raised some suspicions for those in the know. (Note this is speculation based on what is said in the Future Fund resignation post; it would be good to actually hear from them about this.)
The quote you're citing is an argument for abject helplessness. We shouldn't be so confident in our own utter lack of capacity for risk management that we fund this work with $0.
I disagree with this. I think we should receive money from basically arbitrary sources, but I think that money should not come with associated status and reputation within the community. If an old mafia boss wants to buy malaria nets, I think it's much better if they can than if they cannot.
I think the key thing that went wrong was that in addition to Sam giving us money and receiving charitable efforts in return, he also received a lot of status and in many ways became one of the central faces of the EA community, and I think that was quite bad. I think we should have pushed back hard when Sam started being heavily associated with EA (e.g. we should not have invited him to things like the coordination forum, or had him speak at lots of EA events, etc.).
I guess it also depends on where the funding is going. If a bloody dictator gives a lot of money to GiveDirectly or another charity that spends the money on physical goods (anti-malaria nets) which are obviously good, then it's still debatable, but there's less concern. But if the money is used in an outreach project to spread ideas, then it's a terrible outcome. It's similarly dangerous for research institutions.
What's the specific mistake you think was made? Do you think e.g. "being very good at crypto / trading / markets" shouldn't on its own be sufficient to have status in the community? [Edit: Answered elsewhere.]
"Old mafia boss?" How about Vladimir Putin?
I tend to lean in your direction, but I think we should base this argument on the most radioactive relevant modern case.
I would be glad to see Putin have fewer resources and to see more bednets being distributed.
I do think the influence angle is key here. If Putin were running a random lottery in which he chose any organization in the world to receive a billion dollars from him, and it happened to be my organization, I think I should keep the money.
I think it gets trickier if we think about Putin giving money directly to me, because, like, presumably he wants something in return. But if there were genuine proof he didn't want anything in return, I would be glad to take it, especially if the alternative is that it fuels a war with Ukraine.
Right, I agree that it's good to drain his resources and turn them into good things. The problem is that right now, our model is "status is a voluntary transaction." In that model, when SBF, or in this example Putin, donates, they are implicitly requesting status, which their recipients can choose to grant them or not.
I don't think grantees (or even whole movements) necessarily have a choice in this matter. How would we have coordinated to avoid granting SBF status? Refused to have him on podcasts? But if he donates to EA and a non-EA podcaster (maybe Tyler Cowen) asks him, SBF is free to talk about his connection and reasoning. Journalists can cover it however they see fit. People in EA, perhaps simply disagreeing, perhaps because they hope to curry favor with SBF, may self-interestedly grant status anyway. That wouldn't be very altruistic, but we should be seriously examining the degree to which self-interest motivates people to participate in EA right now.
So if we want to be able to accept donations from radioactive (or potentially radioactive) people, we need some story to explain how that avoids granting them status in ways that are out of our control. How do we prevent journalists, podcasters, a fraction of the EA community, and the donors themselves from constructing a narrative of the donor as a high-status EA figure?
I think my favorite version of this is something like "You can buy our scrutiny and time." Like, if you donate to EA, we will pay attention to you, and we will grill you in the comments section of our forum. In some sense this is an opportunity for you to gain status, but it's also an opportunity for you to lose a lot of status if you don't hold yourself well in those situations.
I think a podcast where someone would have grilled SBF on his controversial stances would have been great. Indeed, I was actually planning to do a public debate with him in February, where I was going to bring up his reputation for lack of honesty and his involvement in politics that seemed pretty shady to me, but some parts of EA leadership actively requested that I not do that, since it seemed too likely to explode somehow and reflect really badly on EA's image.
I also think repeatedly communicating that we don't think he is a good figurehead for the EA community, not inviting him to the coordination forum and other leadership events, etc., would have been good and possible.
Indeed, right now I am involved in talking to a bunch of people about similar situations, where we are associated with a bunch of AI capabilities companies, and there are a bunch of people in policy whom I don't want to support, but they are working on things that are relevant to us and that are useful to coordinate on (and sometimes give resources to). And I think we could just have a public statement like "despite the fact that we trade with OpenAI, we also think they are committing a terrible atrocity, and we don't want you to think we support them." I think this would help a lot, and it doesn't seem that hard. And if they don't want to take the other side of that deal, and only want to trade with us if we say that we think they are great, then we shouldn't trade with them.
This is an issue I have with optimizing for image: you aren't able to speak out against a thought leader because they're successful, and EA optimizing for seeming good is how we got into this mess in the first place.
I support these actions, conditional on them becoming common-knowledge community norms. However, it's strictly less likely for us to trade with bad actors while projecting that we don't support them than it is for us to just trade with bad actors.
This is actually best practice in banks and publicly held corporations...