Interesting vs. Important Work—A Place EA is Prioritizing Poorly

There are many important issues in the world, and many interesting topics. But these are not the same thing, and we should beware of suspicious convergence. Given that, our default assumption should be that the most interesting topics we hear about receive far more attention than their importance warrants. Heeding Scott Alexander’s recent warning, I’ll therefore ask much more specifically: what are the most intellectually interesting topics in Effective Altruism? Then I’ll suggest that we should be doing less work on them, and list a few concrete suggestions for how to do that.

What are the interesting things?

Here are some of my concrete candidates for the most interesting work: infinite ethics, theoretical AI safety, rationality techniques, and writing high-level critiques of EA[1]. And to be clear, all of these ARE important. But the number of people we need working on them is probably smaller than the current trajectory implies, and we should probably de-emphasize status for the most theoretical work[2].

To be clear, I love GPI, FHI, CSER, MIRI, and many other orgs doing this work. The people I know at each org are great, and I think that many of the things they do are, in fact, really important. And I like the work they do: not only do I think it’s important, it’s also SUPER interesting, especially to people who like philosophy, math, and/or economics. But the convergence between important and interesting is exactly the problem I’m pointing towards.

Motivating Theoretical Model

Duncan Sabien talks about Monks of Magnitude, where different people work on things with different feedback-loop lengths, from 1 day, to 10 days, to people who spend 10,000 days thinking. He more recently mentioned that he noticed “people continuously vanishing higher into the tower,” that is, focusing on more abstract and harder-to-evaluate issues, and that very few people have done the opposite. One commenter, Ben Weinstein-Raun, suggested several reasons, among them that longer-loop work is more visible and higher status. I think this critique fits the same model: we should be suspicious that such long-loop work is overproduced. (Another important issue is that “it’s easier to pass yourself off as a long-looper when you’re really doing nothing,” but that’s a different discussion.)

The natural tendency to do work that is more conceptual and/or harder to pin to a concrete, measurable outcome is one we should fight back against, since by default such work is overproduced. The basic reason it is overproduced is that people who are even slightly drawn to status or to interesting research, i.e. everyone, will give it at least slightly more attention than warranted; further, because others are already focused on it, the marginal value is lower.
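
To sharpen the overproduction claim, here’s a minimal toy model (my own illustrative sketch; the symbols $v$, $s$, and $c$ and the functional assumptions are mine, not anything from Sabien’s post). Let $v(n)$ be the true marginal value of the $n$-th person working on a topic, declining in $n$, and suppose each prospective contributor perceives that value with a status-and-interestingness bonus $s$ added on:

$$\hat{v}(n) = v(n) + s, \qquad v'(n) < 0, \qquad s > 0.$$

If people keep entering until perceived marginal value falls to their outside option $c$, entry stops where $\hat{v}(n^*) = c$, i.e. where $v(n^*) = c - s < c$. The field overshoots the point where true marginal value covers the outside option, and the overshoot grows with the bonus $s$.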

This is not to say that the optimal amount of fun and interesting research is zero, nor that all fun and interesting work is unimportant. We do need 10,000-day monks, and there are lots of interesting long-termist questions that are significant moral priorities. I agree with the argument for a form of long-termism. But this isn’t a contradiction: work on long-termism can be concrete and visible, isn’t necessarily conceptual, and doesn’t necessarily involve slow feedback loops.

Towards fixing the problem[3]

Effective Altruism needs to be effective, and that means we need evaluable outputs wherever possible.

First, anyone attempting to be impactful needs a theory of change[4] and an output that has some way of affecting the world. Everyone, especially academics and researchers, should make this model explicit, at least to themselves, but ideally also to others.

If you’re writing forum posts or publishing papers, you need to figure out who is reading them, and whether your input will be helpful to those readers[5]. (Not whether it will be upvoted or cited, because proxies are imperfect, but whether it will actually have an impact.) You should then look back and evaluate the work: did you accomplish something, or is the impact at least on its way to happening?

Lastly, I think that we as a community need to put status in better places. Philosophy and economics are fun, but there’s a reason they are seen as impractical and belonging in the ivory tower. Ceteris paribus, academic work should get less status than concrete projects. And anyone on that path who justifies it as their most impactful option needs a pretty strong story about impact[6], and a really good reason to believe they wouldn’t be more impactful elsewhere.

Thanks to Abi Olvera and Abie Rohrig for feedback.

  1. ^

    Yes, I see the irony of suggesting less conceptual critical work—this is procrastivity on my part.

    Also, feel free to suggest additional areas in the comments.

  2. ^

To be clear, I’m guilty of some of this: I worked on a really fun paper with Anders Sandberg on the limits to value in a finite universe. During the same time period, I worked on introducing state legislation on lab accidents. I will freely admit that one was more fun and the other more impactful, and I think readers can guess which is which.

Also, in general, altruists need to push back against the exact opposite problem, where people do concrete things with obvious immediate impact instead of trying to optimize at all, either just to have done something or to feel good about their work. However, I think the “purchase your fuzzies separately” criticism is already very clearly integrated into effective altruism, and if anything, the pendulum has swung too far away from that extreme.

  3. ^

This won’t fix the problem; that takes actual concrete work, not critical forum posts. (In general, work which is useful doesn’t get titled “Towards X.” But I already had a theory section, and solving problems is hard, while writing papers or blog posts is much easier, more intellectually stimulating, and more fun. Which is exactly the point.)

  4. ^

As an aside, focusing on areas that others are excited about as impactful, without a clear inside view of the topic, is a really bad heuristic. It means you drank the Kool-Aid, but you likely aren’t capable of focusing on the things that are neglected within the domain, because you don’t have your own model. Your work might still have an impact, but to make that happen you’ll need to be incredibly careful and deferential to others, which is absolutely horrible for epistemic health.

Concretely, this suggests a few strategies. Don’t defer wholesale; instead, give yourself slack and explore. Counter-intuitively, don’t aim for impact first. Working on community priorities before understanding them should be discouraged in favor of moving more slowly, building inside views, and achieving clarity about why you disagree with others, even if that takes quite a long time. (And if and where you agree with the consensus, or even with some specific group’s position, again, beware of suspicious convergence between your views and the implicit social pressure to agree.)

  5. ^

My hope for this post is that people who read the EA Forum personally update towards placing more value on concrete work, and, both mentally and interpersonally, assign lower status to people who work at the coolest EA orgs.

Note to those people and orgs: I love you all, please don’t hate me. But the suggestions also apply to you. On the other hand, I think that much of this is a community-dynamics issue rather than something the organizations themselves are doing wrong, so it would be great if you explicitly signaled the value of concrete work further, to push back against the tendency to emphasize the importance of your own.

  6. ^

Concretely, ask other people in EA who are not in academia whether they buy your argument for impact, or whether they think you should be doing something different. To minimize distortion due to interpersonal dynamics, you probably want to ask a group of people to all give you feedback, ideally via a Google Doc, with the option to do so anonymously.