While I see some value in detailing commonly held positions, as this post does, and I think this post is well-written, I want to flag my concern that it seems like a great example of a lot of effort going into creating content that nobody really disagrees with. This sort of armchair, qualified writing doesn’t seem to me like a very cost-effective use of EA resources, and I worry we do a lot of it, partly because it’s easy to do and gets a lot of positive social reinforcement, to a much greater degree than bold, empirical writing tends to get.
I think that the value of this type of work comes from:
(i) making it easier for people entering the community to come up to the frontier of thought on different issues;
(ii) building solid foundations for our positions, which makes it easier to take large steps in subsequent work.
Cf. Olah & Carter’s recent post on research debt.
For what it’s worth, I do agree that’s where most of the value comes from, though I think the value is much lower than the value of similar empirical/bold writing, at least for this example.
While enough people are skeptical about rapid growth and no one (I think) wants to sacrifice integrity, the warning to be careful about the politicization of EA is a timely and controversial one, because well-known EAs have put a lot of might behind Hillary’s election campaign and the prevention of Brexit, to the point that the lines between private efforts and EA efforts may blur.
I doubly agree here. The title “Hard-to-reverse decisions destroy option value” is hard to disagree with because it is pretty tautological.
Over the last couple of years, I’ve found it to be a widely held view among researchers interested in the long-run future that the EA movement should on the margin be doing less philosophical analysis. It seems to me that it would be beneficial for more work to be done on the margin on (i) writing proposals for concrete projects, (ii) reviewing empirical literature, and (iii) analyzing technological capabilities and fundamental limitations.
Philosophical analysis, such as much of EA Concepts and these characterizations of how to think about counterfactuals and optionality, is less useful than (i)-(iii) because it does not very strongly change how we will try to affect the world. Suppose I want to write some EA project proposals. In such cases, I am generally not very interested in citing these generalist philosophical pieces. Rather, I usually want to build from a concrete scientific/empirical understanding of related domains and similar past projects. Moreover, I think “customers” like me who are trying to propose concrete work are usually not asking for this kind of philosophical analysis and are more interested in (i)-(iii).
Over the last couple of years, I’ve found it to be a widely held view among researchers interested in the long-run future that the EA movement should on the margin be doing less philosophical analysis.
I agree with some versions of this view. For what it’s worth I think there may be a selection effect in terms of the people you’re talking to, though (perhaps in terms of the organisations they’ve chosen to work with): I don’t think there’s anything like consensus about this among the researchers I’ve talked to.
For an example of this view, see Nick Beckstead’s research advice from back in 2014:
I think most highly abstract philosophical research is unlikely to justify making different decisions. For example, I am skeptical of the “EA upside” of most philosophical work on decision theory, anthropics, normative ethics, disagreement, epistemology, the Fermi paradox, and animal consciousness—despite the fact that I’ve done a decent amount of work in the first few categories. If someone was going to do work in these areas, I’d probably be most interested in seeing a very thorough review of the Fermi Paradox, and second most interested in a detailed critique of arguments for the overwhelming importance of the very long-term future.
I’m also skeptical of developing frameworks for making comparisons across causes right now. Rather than, e.g., trying to come up with some way of trying to trade off IQ increases per person with GDP per capita increases, I would favor learning more about how we could increase IQ and how we could increase GDP per capita. There are some exceptions to this; e.g., I see how someone could make a detailed argument that, from a long-run perspective, human interests are much more instrumentally important than animal interests. But, for the most part, I think it makes more sense to get information about promising causes now, and do this kind of analysis later. Likewise, rather than developing frameworks for choosing between career areas, I’d like to see people just gather information about career paths that look particularly promising at the moment.
Other things being equal, I strongly prefer research that involves less guesswork. This is less because I’m on board with the stuff Holden Karnofsky has said about expected value calculations—though I agree with much of it—and more because I believe we’re in the early days of effective altruism research, and most of our work will be valuable in service of future work. It is therefore important that we do our research in a way that makes it possible for others to build on it later. So far, my experience has been that it’s really hard to build on guesswork. I have much less objection to analysis that involves guesswork if I can be confident that the parts of the analysis that involve guesswork factor in the opinions of the people who are most likely to be informed on the issues.
I suspect that the distinctions here are actually less bright than “philosophical analysis” and “concrete research”. I can think of theoretical work that is consistent with doing what you call (i)-(iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it’s not itself intended to gather new data. And a lot of this theoretical work is quite decision relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work. Another example that’s close to my heart is value of information work. There are existing problems in how to identify high and low value of information, when to explore vs. exploit, and so on. I suspect that doing empirically informed theoretical work on these questions would be more fruitful than trying to solve them through empirical means only. So my inclination is to take this on a case-by-case basis. We see radical leaps forward sometimes being generated by theoretical work and sometimes being generated by novel empirical discoveries. It seems odd not to draw from two highly successful methods.
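To make the explore/exploit point a bit more concrete, here is a minimal, self-contained sketch of the standard multi-armed bandit framing using the UCB1 rule; the payout probabilities, round count, and function names are illustrative assumptions for this sketch, not anything taken from the discussion above.

```python
import math
import random

# Illustrative two-armed bandit: the payout probabilities are unknown to the
# agent and are made-up numbers chosen purely for this sketch.
TRUE_PAYOUT_PROBS = [0.45, 0.55]


def ucb1(n_rounds=10_000):
    """Standard UCB1 rule: pick the arm with the highest empirical mean plus
    an exploration bonus that shrinks as the arm is sampled more often."""
    counts = [0, 0]        # times each arm has been pulled
    rewards = [0.0, 0.0]   # total reward observed per arm
    total = 0.0
    for t in range(1, n_rounds + 1):
        if t <= len(counts):
            arm = t - 1    # pull each arm once to initialise
        else:
            arm = max(
                range(len(counts)),
                key=lambda a: rewards[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if random.random() < TRUE_PAYOUT_PROBS[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
        total += reward
    return total, counts


if __name__ == "__main__":
    random.seed(0)
    total, counts = ucb1()
    print(f"total reward: {total:.0f}, pulls per arm: {counts}")
```

The point of the toy example is just that the agent has to spend some pulls learning which arm is better (exploration) before it can cash in on that knowledge (exploitation), which is the same structure as the value-of-information questions mentioned above.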
What, then, about pure a priori work like mathematics and conceptual work? I think I agree with Owen that this kind of work is important for building solid foundations. But I’d also go further in saying that if you find good, novel foundational work to do, then it can often bear fruit later. E.g. work in economics and game theory is of this sort, and yet I think that a lot of concepts from game theory are very useful for analyzing real world situations. It would have been a shame if this work had been dismissed early on as not decision relevant.
I suspect that the distinctions here are actually less bright than “philosophical analysis” and “concrete research”. I can think of theoretical work that is consistent with doing what you call (i)-(iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it’s not itself intended to gather new data. And a lot of this theoretical work is quite decision relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work… I suspect that doing empirically informed theoretical work on these questions would be more fruitful than trying to solve them through empirical means only… So my inclination is to take this on a case-by-case basis… What, then, about pure a priori work like mathematics and conceptual work?
I don’t think I’m arguing what you think I’m arguing. To be clear, I wouldn’t claim a bright dividing line, nor would I claim that more philosophical work, or pure mathematics, has no use at all. Nor would I claim that we should avoid theory altogether. I agree that there are cases of theoretical work that could be useful. For example, there is AI safety, and there may be some important crossover work to be done in ethics and in understanding human experience and human values. But that doesn’t mean we just need to throw up our hands and say that everything needs to be taken on a case-by-case basis, if in fact we have good reasons to say we’re overall overinvesting in one kind of research rather than another. The aim has to be to do some overall prioritization.
Another example that’s close to my heart is value of information work. There are existing problems in how to identify high and low value of information, when to explore vs. exploit, and so on… If you find good, novel foundational work to do, then it can often bear fruit later. E.g. work in economics and game theory is of this sort, and yet I think that a lot of concepts from game theory are very useful for analyzing real world situations. It would have been a shame if this work had been dismissed early on as not decision relevant.
I agree that thinking about exploration vs. exploitation tradeoffs is both interesting and useful. However, the Gittins Index was discovered in 1979, and much of the payoff of this discovery came decades afterward. We have good reasons to have pretty high discount rates, such as (i) returns on shaping research communities that are growing at high double-digit percentages, and (ii) double-digit chances of human-level AI in the next 15 years.
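As a rough illustration of the discounting point, here is a minimal sketch (with made-up rates, horizons, and payoff size, not figures anyone in this thread has endorsed) of how standard exponential discounting shrinks the present value of a payoff that only arrives decades later.

```python
# Standard exponential discounting: what a fixed payoff arriving `years` from
# now is worth today at a given annual discount rate. All numbers below are
# illustrative assumptions, not figures from the discussion.

def present_value(payoff: float, rate: float, years: int) -> float:
    return payoff / (1.0 + rate) ** years


if __name__ == "__main__":
    for rate in (0.03, 0.10, 0.20):
        for years in (10, 30):
            pv = present_value(100.0, rate, years)
            print(f"rate={rate:.0%}, payoff of 100 in {years}y is worth {pv:.2f} today")
```

At a 20% annual discount rate, a payoff 30 years out is worth well under 1% of its face value today, which is the sense in which theoretical work whose payoff arrives decades later looks much less attractive under high discount rates.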
There’s very little empirical research going into important concrete issues that many important EA decisions will depend on, such as how to stage useful policy interventions for risky emerging technologies (Allan Dafoe and Mathias Mass notwithstanding), how to build better consensus among decision-makers, how to get people to start more good projects, how to recruit better, and so on. It’s tempting to say that many EAs have wholly forgotten what ambitious business plans and literature reviews on future-facing technologies are even supposed to look like! I would love to write that off as hyperbole, but I haven’t seen any recent examples. And it seems critical that theory should be feeding into such a process.
I’d be interested to know whether people have counter-considerations at the level of what should be a higher priority.
There are two different claims here: one is “type x research is not very useful” and the other is “we should be doing more type y research at the margin”. In the comment above, you seem to be defending the latter, but your earlier comments support the former. I don’t think we necessarily disagree on the latter claim (perhaps on how to divide x from y, and the optimal proportion of x and y, but not on the core claim). But note that the second claim is somewhat tangential to the original post. If type x research is valuable, then even though we might want more type y research at the margin, this isn’t a consideration against a particular instance of type x research. Of course, if type x research is (in general or in this instance) not very useful, then this is of direct relevance to a post that is an instance of type x research. It seems important not to conflate these, or to move from a defense of the former to a defense of the latter. Above, you acknowledge that type x research can be valuable, so you don’t hold the general claim that type x research isn’t useful. I think you do hold the view that either this particular instance of research or this subclass of type x research is not useful. I think that’s fine, but I think it’s important not to frame this as merely a disagreement about what kinds of research should be done at the margin, since this is not the source of the disagreement.
Of course, if type x research is (in general or in this instance) not very useful, then this is of direct relevance to a post that is an instance of type x research. It seems important not to conflate these, or to move from a defense of the former to a defense of the latter.
You’re imposing on my argument a structure that it didn’t have. My argument is that, prima facie, analysing the concepts of effectiveness is not the most useful work that is presently to be done. If you look at my original post, it’s clear that it had a parallel argument structure: (i) this post seems mostly not new, and (ii) posts of this kind are over-invested in. It was well-hedged and made lots of relative claims (“on the margin”, “I am generally not very interested”, etc.), so it’s really weird to be repeatedly told that I was arguing something else.
I think that’s fine, but I think it’s important not to frame this as merely a disagreement about what kinds of research should be done at the margin, since this is not the source of the disagreement.
The general disagreement about whether philosophical analysis is under-invested is the source of about half of the disagreement. I’ve talked to Stefan and Ben, and I think that if I were convinced that philosophical analysis was prima facie under-invested at the moment, then I would view analysis of principles of effectiveness a fair bit more favorably. I could imagine that if they became fully convinced that practical work was much more neglected, then they might want to see more project proposals and literature reviews done too.