Thanks for this post!
Four things I particularly liked about this post were that:

- It's fairly concise
- It has a pretty high degree of reasoning transparency, including transparency about what methods you used and why, and a link to your spreadsheet with the raw data
- It uses historical case studies in quite a structured way and with (as mentioned above) a lot of methodological transparency
  - I'm not aware of many other posts on the Forum that do that, and I suspect this is an approach that should be taken more often
  - See also posts with the History tag
- This post highlights interesting and potentially decision-relevant hypotheses about what people working on AI governance should consider doing
  - E.g., that people working on AI governance should consider:
    - Working hard on making the risks seem more immediate (or something like that)
    - Drawing attention to the possibility that the government might step in if self-governance doesn't occur
    - Acting soon (see also) and/or focusing on just the AGI community specifically, to avoid having to coordinate many people
    - Focusing on journals and public-facing anchor firms
    - Working hard to get several well-credentialed people on board, without worrying too much about having some well-credentialed people arguing "against us"
  - This post doesn't seem to provide very strong evidence for any of those hypotheses, but nor do you claim it does, and I think it's useful merely to highlight the hypotheses as warranting further investigation
    - I think I'd expect much of the value of analysing historical case studies to come from highlighting hypotheses for further consideration, rather than from strongly supporting some highlighted hypotheses relative to others