Many people have to make plans about the future of the EA community, as well as the future of their own organizations, which might have received FTX funding.
Things like this seem to have a decent bearing on the total amount of clawbacks as well as the actual ethical lines that were crossed in the process of FTX, which seem highly relevant to me for learning from this mess.
I think a world where you loaned out a bunch of customer deposits from customers who had explicitly said "yes, you can lend out my deposits" is very different from a world where you stole a bunch of people's money. Both are bad, but I think they imply pretty different levels of wrongdoing.
I think given the speed of lawsuits in cases like this, we might not have much clearer information for many years, and I, at least, have many decisions relating to things like this that cannot wait a year.
(Just to be concrete, at least my personal continued involvement in the EA community is pretty contingent on the details of what happened here: the degree to which the EA community was partially responsible for this mess, the degree to which it seems likely to learn from its mistakes, and the ways the rest of the world will relate to the EA community after this. In some sense most of my career feels currently on hold, so finding out more earlier has very high information value for me.)
I'm really surprised to read this, thank you for sharing!
the actual ethical lines that were crossed in the process of FTX, which seem highly relevant to me for learning from this mess.
I think that whether fraud was committed or not, it seems to me now even clearer than before that we should never allow it, and we should make it even more of a priority to speak out against evil. Much like whether COVID actually came from a lab or not wouldn't change too much our attitude towards biosecurity, given that a significant probability that it came from a lab already justifies being very careful about that risk in the future.
my personal continued involvement in the EA community is pretty contingent on the details of what happened here, e.g. the degree to which the EA community was partially responsible for this mess
The EA community is like 20k people, I find it surprising to judge whether e.g. ARC (I assume you're more on the AI Safety side) would be more or less worthy of support based on whether <~20 people stole money. Even "EA leadership" is a lot of people, and before this scandal I don't think many were including SBF (but indeed he was wrongly seen as a role model).
If it turned out that SBF indeed stole money, in what different ways would you try to do the most good?
I think that whether fraud was committed or not, it seems to me now even clearer than before that we should never allow it, and we should make it even more of a priority to speak out against evil.
Just to be clear, I think "never commit fraud" is not a good ethical guideline to take away from this (both in that it isn't learning enough from this situation, and in the sense that there are a lot of situations where you want to do fraud-adjacent things that are actually the ethical thing to do), as I've tried to argue in various other places on the forum. I think I would be quite sad if that is the primary lesson we take away from this.
I do think there is something important in the "speak out against evil" direction, and that's the direction I am most interested in exploring.
The EA community is like 20k people, I find it surprising to judge whether e.g. ARC (I assume you're more on the AI Safety side) would be more or less worthy of support based on whether <~20 people stole money.
I think the situation with OpenAI is quite analogous to the situation with FTX in terms of its harms for the world and the EA community's involvement, and sadly I do think Paul has contributed substantially to the role that OpenAI has played in the EA ecosystem, so that's a concrete way in which I think the lessons we learn here have a direct relevance to how I relate to ARC. I also think I feel quite similar about the Anthropic situation.
My support for EA is not conditional on nobody in EA being blameworthy. I am part of EA in order to improve the world. If EA makes the world worse, I don't want to invest in it, independently of whether any specific individual can clearly be blamed for anything bad. In as much as we give rise to institutions like FTX and OpenAI, it really seems like we should change how we operate, or cease existing, and I do think the whole EA thing seemed quite load-bearing for both OpenAI and FTX coming into existence.
and before this scandal I don't think many were including SBF (but indeed he was wrongly seen as a role model).
I think it would have been quite weird to not include SBF in "EA Leadership" last year. It was pretty clear he was doing a lot of leading, and he was invited to all the relevant events I can think of.
None of this is decision-relevant for me and waiting a few months or years makes complete sense, but alas "interesting" != "decision-relevant."
I am curious as to how this would be decision-relevant at this moment.
It seems to me that there's a lot of information that will surface in the next months and years (random example: context behind the FTX US President stepping down the day before a suspicious (in hindsight) transaction).
To me, the best thing to do seems to be to just wait for the judicial proceedings and for information to surface.