I think the global argument is that power in EA should be deconcentrated/diffused across the board, and subjected to more oversight across the board, to reduce risk from its potential misuse. I don't think Zoe is suggesting that any actor should get a choice on how much power to lose or oversight to have. Could you say more about how adverse selection interacts with that approach?
Even if every actor in EA agreed to limit its power, we wouldn't be able to limit the power of actors outside of EA, so the actors most willing to accept constraints would end up with the least influence. This is the adverse selection effect.
This means that we need to carefully consider the cost-benefit trade-off in proposals to limit the power of groups. In some cases, e.g. given that the FTX fiasco was a large systemic risk, it's clear that there's a need for more oversight. In other cases, it's more like the analogy of putting Frodo's quest on hold until we've conducted an opinion survey of Middle Earth.
(Update: Upon reflection, this comment makes me sound like I'm more towards 'just do stuff' than I am. I think we need to recognise that we can't assume someone is perfectly virtuous just because they're an EA, but I also want us to retain the characteristics of a high-trust community; having to check up on every little decision is a characteristic of a low-trust community.)
Thanks. That argument makes sense on the assumption that a given reform would reduce EA’s collective power as opposed to merely redistributing it within EA.