Whether or not fraud was committed, it seems even clearer to me now than before that we should never allow it, and that we should make speaking out against evil even more of a priority.
Just to be clear, I think “never commit fraud” is not a good ethical guideline to take away from this (both because it doesn’t learn enough from this situation, and because there are a lot of situations where doing fraud-adjacent things is actually the ethical choice), as I’ve tried to argue in various other places on the forum. I would be quite sad if that were the primary lesson we take away from this.
I do think there is something important in the “speak out against evil” direction, and that’s the direction I am most interested in exploring.
The EA community is like 20k people; I find it surprising to judge whether e.g. ARC (I assume you’re more on the AI Safety side) is more or less worthy of support based on whether <~20 people stole money.
I think the situation with OpenAI is quite analogous to the situation with FTX, both in terms of its harms for the world and in terms of the EA community’s involvement, and sadly I do think Paul has contributed substantially to the role that OpenAI has played in the EA ecosystem. That’s a concrete way in which the lessons we learn here have direct relevance to how I relate to ARC. I feel quite similarly about the Anthropic situation.
My support for EA is not conditional on nobody in EA being blameworthy. I am part of EA in order to improve the world. If EA makes the world worse, I don’t want to invest in it, independently of whether any specific individual can clearly be blamed for anything bad. Inasmuch as we give rise to institutions like FTX and OpenAI, it really seems like we should change how we operate, or cease existing, and I do think the whole EA thing seemed quite load-bearing for both OpenAI and FTX coming into existence.
…and before this scandal I don’t think many were including SBF (but indeed he was wrongly seen as a role model).
I think it would have been quite weird to not include SBF in “EA Leadership” last year. It was pretty clear he was doing a lot of leading, and he was invited to all the relevant events I can think of.