I agree with this post in spirit, but disagree with your concrete examples. I mostly just don’t think that “expert” is actually a coherent category for these kinds of projects.
Taking your examples in turn: I think WWOTF probably did a great job on the moral philosophy, but it majorly underrates x-risk, especially from AI. And that part reflects neither an expert consensus nor Will's area of expertise. The book also gives a bunch of takes on abolition, which is very much a history question, etc.
I think that Joseph's report was pretty great, and very much the kind of work that should be done by a philosopher. It was mostly disentangling, clarifying and distilling arguments that previous (sometimes highly technical) people had mostly made from fuzzy intuitions. I don't think that working in AI trains these skills. It gives a lot of intuitions about the capabilities of current systems, and some intuitions about future systems, but experts are often pretty bad at forecasting! E.g. I'm not sure I can think of anyone who qualitatively predicted what GPT-3 would be able to do.
Ditto, I think that Ajeya's work was excellent, ambitious and interdisciplinary. I can't think of many experts whom I'd expect to have done a better job (not that I think the report can't be improved, just that the parts I'd want improved don't depend much on specific expertise).
You might be right. It is indeed harder to identify experts to lead research projects that are highly interdisciplinary in nature.