Hey Ryan, I’d be particularly interested in hearing more about your reasons for your first point (about theoretical vs. empirical work).
Sure. Here are some reasons I think this:
Too few EAs are doing object-level work (excluding donations), and empirical research into possible actions would help with this. Note that there were not enough people interested in starting ventures for EAV, and that newcomers are often at a loss to figure out what EA does apart from philosophize. This makes it hard to attract practically competent people, such as businesspeople and scientists, and to overcome our philosopher-founder effect. From the standpoint of running useful projects, I think the most useful outputs would be business plans and research agendas, followed by empirical investigations of issues, then theoretical prioritization, then philosophical investigations. However, it seems to me that most people are working in the latter categories.
For EAs who are actually acting, empirical research would sway their actions more easily. Although most people working on high-impact areas were brought there by theoretical reasoning, their ongoing questions are more concrete. For example, in AI, I wonder: To what extent have concerns about edge-instantiation and incorrigibility been borne out in actual AI systems? To what extent has AI progress been driven by new mathematical theory rather than empirical results? What kind of CV do you need to participate in governing AI? What can we learn about this from the case of nuclear governance? Answers to questions like these would help people prioritize much more than, for example, philosophical arguments about the different reasons for working on AI as compared to immigration.
Empirical research is easier to build on: concrete findings give later researchers something to verify, extend, and combine.
One counterargument is that perhaps these action-oriented EAs have too-short memories: since their previous decisions relied on theory from people like Bostrom, shouldn't we expect the same of their future decisions? There are two rebuttals. First, theoretical investigations are especially dependent on the talent of their authors. I would not argue that people like Bostrom (if we know of any) should stop philosophizing about deeply theoretical issues, such as infinite ethics or decision theory. However, that research must be supported by many more empirically-minded investigators. Second, there is reason to expect the usefulness of theoretical investigation to decrease relative to empirical research over time, as the important insights are harvested, people start implementing plans, and plausible catastrophes draw nearer.