Optimistic note with low confidence:
My impression is that SBF thought he was doing an 'unpalatable' but correct thing, given his calculations (and his epistemic immodesty). Promoting a central meme in EA like "naïve calculations like this are too dangerous and too fallible" might resolve much of the issue. I think dangerously optimize-y people in EA are already updating in this direction as a result of FTX. Before FTX, being "hardcore" and doing naïve calculations was sometimes seen as cool. If we correct hard for this now, it may be less of an issue in the future.
Two main caveats:
The whole "don't do naïve calculations" idea is fairly complex and not easy to communicate, which may make it hard to correct for.
The movement of memes through a space as large and complex as EA is probably hard to predict. All sorts of unexpected things might happen, and I have no idea which. For example, a new counterculture could form within EA that becomes extremely dangerously optimize-y. (But at least it would face more of an uphill battle in this world.)
Yes, I agree that could be a good scenario to emerge from this – a very salient example of this kind of thinking going wrong is one of the most effective ways to convince people to stop doing it.