I suspect that many of the “very best” ideas, in the sense of being ex ante the best things to do when each is evaluated in isolation (without considering other things in the space, including things not currently being done), will look very similar to each other.
Like 10 extremely similar AI alignment proposals.
So I’d expect any such list to include a lot of regularization for uniqueness and side-constraint optimization, rather than being a ranked list of the most important x-risk-reducing projects on the margin; the FTX project ideas list shouldn’t be read as the latter. Arguably, though, the latter is closer to how altruistic individuals should be deciding which projects to pursue, after adjusting for personal fit.