To be clearer, I'm bringing up the OpenAI drama because it's instructive for highlighting what is and isn't going wrong more generally. I don't think the specifics of what went wrong with FTX point at the central thing that's of concern. I think the key factor behind EA's past and future failures comes down to poor-quality decision-making among those with the most influence, rather than the degree to which everybody is sensitive to someone's shadiness.
(I'm assuming we agree FTX and the OpenAI drama were both failures, and that failures can happen even among groups of competent, moral people who act according to the expectations set for them.)
I don't know what the cause of the poor decision-making is. Social norms preventing people from expressing disagreement, org structures, unclear responsibilities, conflicts of interest, lack of communication, low intellectual diversity — it could be one of these, a combination, or maybe something totally different. I think it should be figured out and resolved, though, if we are trying to change the world.
So, if there is an investigation, it should be part of a broader effort to make sure EAs in positions of power will consistently handle difficult situations incredibly well (as opposed to just satisfying people's need for more specific explanations of what went wrong with FTX).
There are many ways in which EA can create or destroy value, and our eagerness to 'do something' in response to people being shady is a weirdly narrow metric on which to assess the movement.
EDIT: I would really appreciate someone saying what they disagree with.