My hunch is that most useful philosophical discussion is already being had by FHI and that there is going to be lower-hanging fruit elsewhere. It’s not just Bostrom there who is smart. People like Stuart Armstrong and Toby Ord, and also Will MacAskill, who is affiliated with the wider University of Oxford… these kinds of people think similarly to most EAs and would think of most of the things I would think of, if I were in that field.
So I think my competitive advantage has to lie in some other skill that people like Toby and Will can’t easily get. This means that technical fields, including machine learning and genomics, are now a lot more exciting to me.
I’m not sure how much I believe this reasoning. FHI does a great job attacking neglected problems, but they are a tiny number of people in absolute terms, and there are a lot of important questions they’re not addressing.
That’s not to say that your competitive advantage doesn’t include technical skills, but I’m not sure that the presence of a handful of people could reasonably push the balance that far (especially as there are also several people with EA sympathies in a variety of technical fields).
There are a lot of questions in every field that are not being addressed by EAs, and I would hardly single philosophy out as more important than the others.
Whatever one says about philosophical investigation having spawned most of the fields we now know, or about the principles of clear thinking depending on it, this doesn’t imply that we need more of it in EA at the current margin. Rather, it’s initially plausible that we need less of it, given that Nick Beckstead has left philosophy and that about half of FHI staff time seems to be going to more empirical issues. In developing a priori or philosophical thinking about existential risk, there are only so many ways one can recapitulate the existential-risk and astronomical-waste arguments. Eventually, one must interact with evidence. For young philosophers, problems in prevailing philosophical thought, such as elements of linguistic philosophy and anti-empirical tendencies, will also make training and career progression difficult.
It seems more promising to try to evaluate what needs to be done practically to build better foresight into our political systems, and to consider which safety devices we can engineer into risky emerging technologies. Once some such endeavours are prioritised and then prototyped, there will be significant roles for coalition-building and outreach to relevant scientists who might contribute to safety and foresight efforts.
Concretely, if your research agenda were about a priori philosophy and did not include prioritisation or political research, then I don’t think it would be the highest-impact option.
I basically agree with all of this. :) I was just sceptical about the expressed reasoning, and I think it’s often worth following up in such cases as it can uncover insights I was missing.