My take is:
EA (i.e. mainly elite EAs) fucked up and bears considerable responsibility for the FTX thing
EA also fucked up big time with the OpenAI board drama, in a way that blew up less badly than it could have, but reflects even worse on the state of elite EA than FTX does
Public investigations and post-mortems won’t help per se. What would help is a display of leadership that convincingly puts to bed any concern that similarly poor epistemics and practices will recur in the future
Wasn’t the OpenAI thing basically the opposite of the mistake with FTX, though? With FTX, people ignored what appears to have been a fair amount of evidence that a powerful, allegedly ethical businessperson was in fact shady. At OpenAI, people seem to have got evidence (or at least what they perceived as evidence, and we have no strong reason to think they were wrong) that a powerful, allegedly ethically motivated businessperson was in fact shady, so they learned the lessons of FTX and tried to do something about it (and failed).
I think that’s why it’s informative. If EA radically changes in response to the FTX crisis, then it could easily put itself in a worse position (leading to more negative consequences in the world).
The intrinsic problem appears to be the quality of governance, rather than a specific systematic error or blind spot.
To be clearer, I’m bringing up the OpenAI drama because it is instructive for highlighting what is and is not going wrong more generally. I don’t think the specifics of what went wrong with FTX point at the central thing that’s of concern. I think the key factor behind EA’s past and future failures comes down to poor-quality decision-making among those with the most influence, rather than the degree to which everybody is sensitive to someone’s shadiness.
(I’m assuming we agree that FTX and the OpenAI drama were both failures, and that failures can happen even among groups of competent, moral people who act according to the expectations set for them.)
I don’t know what the cause of the poor decision-making is. Social norms that prevent people from expressing disagreement, org structures, unclear responsibilities, conflicts of interest, lack of communication, low intellectual diversity: it could be one of these, a combination, or maybe something totally different. I think it should be figured out and resolved, though, if we are trying to change the world.
So, if there is an investigation, it should be part of a move towards making sure EAs in positions of power will consistently handle difficult situations incredibly well (as opposed to just satisfying people’s need for more specific explanations of what went wrong with FTX).
There are many ways in which EA can create or destroy value, and our eagerness to ‘do something’ in response to people being shady is a weirdly narrow metric by which to assess the movement.
EDIT: I would really appreciate someone saying what they disagree with