There are a lot of questions in every field that are not being addressed by EAs, and I would hardly single philosophy out as more important than the others.
Whatever one says about philosophical investigation having spawned most of the fields we now know, or about the principles of clear thinking depending on it, this doesn’t imply that we need more of it in EA at the current margin. Rather, it’s initially plausible that we need less of it: Nick Beckstead has left philosophy, and about half of FHI staff time seems to be going to more empirical issues. In developing a priori or philosophical thinking about existential risk, there are only so many ways to recapitulate the existential-risk and astronomical-waste arguments. Eventually, one must interact with evidence. For young philosophers, problems in prevailing philosophical thought, including elements of linguistic philosophy and anti-empirical tendencies, will make training and career progression difficult.
It seems more promising to try to evaluate what needs to be done practically to build better foresight into our political systems, and to consider which safety devices we can engineer into risky emerging technologies. Once some such endeavours are prioritised and then prototyped, there will be significant roles for coalition-building and outreach to relevant scientists who might contribute to safety and foresight efforts.
Concretely, if your research agenda were about a priori philosophy and did not include prioritisation or political research, then I don’t think it would have the highest impact.
I basically agree with all of this. :) I was just sceptical about the expressed reasoning, and I think it’s often worth following up in such cases as it can uncover insights I was missing.