I agree if for CFAR you are looking at the metric of how rational their alumni are. If you instead look at CFAR as a funnel for people working on AI risk, the “evidence base” seems clearer.
Sure, I was pointing to the evidence base for the techniques taught by CFAR & other rationality training programs.
CFAR could be effective at recruiting people into AI risk due to Schelling-point dynamics, without the particular techniques it teaches being efficacious. (I’m not sure that’s true, just pointing out an orthogonality here.)