I’ve wondered lately whether there could be a future for crafty philosophers to carve out a niche in EA. Many economists have lately been able to apply their skills to work that, on its face, doesn’t look like economics as traditionally conceived. There seems to be a similar opportunity in philosophy, given the skill set philosophers bring to the table.

The greatest difficulty, I think, is that philosophy (and philosophers) have typically been interested in advancing specific discussions about particular philosophical problems; in a sense, the goal is to discover what’s true. In that light, EA simply isn’t as interesting to many philosophers, because it doesn’t seem to pose great unanswered philosophical questions. The premise of EA is relatively simple, and given many philosophers’ sympathies toward consequentialism, it may be met with wide acceptance. But that leaves little to discuss philosophically, which is an obvious problem for philosophy as traditionally conceived.

Still, I see no good reason why at least some philosophers should not branch out and take an interest in a sort of “everyman” ethics. This would involve less discussion of what is right and more persuasion. I certainly think there’s room in the field for it, and given that many departments are on a sort of “justification treadmill”, having to justify their existence to the heads of their universities and the general public, it may be exactly what the field needs.
I have actually heard some moral philosophers lament that when people get sick, they call a doctor; when their car has problems, they call a mechanic; but when they face a moral predicament, no one calls a moral philosopher. It seems to me that EA is a perfect platform for philosophers to advance, and that at least some philosophers might welcome the opportunity. The question that needs answering is whether this can be done by a philosopher who is still trying to build a career, or whether it must be left to people like Singer who already have successful careers.
I understand the Centre for Effective Altruism (TGPP, GWWC, etc.) has done, and continues to do, a lot of philosophical and methodological research, so you might want to talk to them.
My hunch is that most of the useful philosophical discussion is already being had at FHI, and that there will be lower-hanging fruit elsewhere. It’s not just Bostrom there who is smart. People like Stuart Armstrong and Toby Ord, and also Will MacAskill, who is affiliated with the wider University of Oxford… these kinds of people think similarly to most EAs and would think of most of the things that I would think of if I were in that field.
So I think my competitive advantage has to lie in some other skill that people like Toby and Will can’t easily get. This means that technical fields, including machine learning and genomics, are now a lot more exciting to me.
I’m not sure how much I believe this reasoning. FHI does a great job attacking neglected problems, but it has only a tiny number of people in absolute terms, and there are a lot of important questions they’re not addressing.
That’s not to say that your competitive advantage doesn’t include technical skills, but I’m not sure that the presence of a handful of people could reasonably push the balance that far (especially as there are also several people with EA sympathies in a variety of technical fields).
There are a lot of questions in every field that are not being addressed by EAs, and I would hardly single philosophy out as more important than the others.
Whatever one says about the fact that philosophical investigation has spawned most of the fields we now know, or that the principles of clear thinking depend on it, this doesn’t imply that we need more of it in EA at the current margin. If anything, it’s initially plausible that we need less of it: Nick Beckstead has left philosophy, and about half of FHI staff time seems to be going to more empirical issues. In developing a priori or philosophical thinking about existential risk, there are only so many ways one can recapitulate existential-risk and astronomical-waste arguments; eventually, one must engage with evidence. For young philosophers, problems in prevailing philosophical thought, including elements of linguistic philosophy and anti-empirical tendencies, will make training and career progression difficult.
It seems more promising to try to evaluate what needs to be done practically to build better foresight into our political systems, and to consider which safety devices we can engineer into risky emerging technologies. Once some such endeavours are prioritised and then prototyped, there will be significant roles for coalition-building and outreach to relevant scientists who might contribute to safety and foresight efforts.
Concretely, if your research agenda were about a priori philosophy and did not include prioritisation or political research, then I don’t think it would be the highest impact.
I basically agree with all of this. :) I was just sceptical about the expressed reasoning, and I think it’s often worth following up in such cases as it can uncover insights I was missing.