My current sense is that there is no motivation to find an alternative because people mistakenly think the rumor mill works well enough, and so see no need to look for something better (and, in the absence of an investigation and clear arguments about why the rumor mill doesn’t work, people probably also think they can’t really be blamed if the strategy fails again).
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I’m interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among the best resources for figuring out what to do instead.
Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn’t work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don’t.
I do also think just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, just that it seems better than what I see EA leadership spending their time on instead. If you had the choice between “figure out a mechanism for detecting and propagating information about future adversarial behavior” and “do an FTX investigation”, I would feel pretty great about both, and honestly don’t really know which one I would prefer. As far as I can tell, neither of these things is seeing much effort invested into it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by “not really knowing what worked and didn’t work in the FTX case” – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn’t rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, getting up to speed with it is time-consuming, and ~all of its standard practices rest on assumptions like “the risk manager has some amount of formal authority” which aren’t true in EA.
(And to be clear: I think these are very big blockers! They just aren’t resolved by doing an investigation.)
Or maybe more specifically: I would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.
It seems like a goal of ~”fraud detection” not further specified may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didn’t manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oli’s reference to “detecting and propagating information about future adversarial behavior”), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the “EA immune system” at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., “fraud”), or pathogens writ large (e.g., “future adversarial behavior”).
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.