University groups as impact-driven truth-seeking teams

A rough untested idea that I’d like to hear others’ thoughts about. This is mostly meant as a broader group strategy framing but might also have interesting implications for what university group programming should look like.

As EA university group organizers, we are often told to “backchain” our way to impact:

What’s the point of your university group?

“To create the most impact possible, to do the greatest good we can”

What do you need in order to create that?

“Motivated and competent people working on solving the world’s most pressing problems”

And as a university group, how do you create those people?

“Find altruistic people, share EA ideas with them, provide an environment where they can upskill”

What specific things can you do to achieve that?

“Intro Fellowships to introduce people to EA ideas; career planning and 1-1s for upskilling”

This sort of strategic thinking is useful at times, but I think it can also be somewhat pernicious, especially when it naively justifies the status quo strategy over other possible strategies.[1] It might instead be better to consider a wide variety of framings and figure out which is best.[2] One framing I want to propose, and would be interested in testing, is viewing university groups as “impact-driven truth-seeking teams.”

What this looks like

An impact-driven truth-seeking team is a group of students trying to figure out what they can do with their lives to have the most impact. Imagine a scrappy research team where everyone is trying to answer the same research question: “how can we do the most good?” Nobody has figured out the answer yet, nobody is a purveyor of any sort of dogma, and everyone is in it together, trying to make the world as good as possible with the limited resources we have.

What does this look like? I’m not all that sure, but it might have some of these elements:

  • An intro fellowship that serves as an introduction to cause prioritization, philosophy, epistemics, etc.

  • Regular discussions or debates about contenders for “the most pressing problem of our time”

    • More of a focus on getting people to research and present arguments themselves than on having conclusions presented to them to accept

  • Active cause prioritization

    • Live Google Docs with arguments for and against certain causes

    • Spreadsheets attempting to calculate possible QALYs saved, possible x-risk reduction, etc. (see the sketch after this list)

    • Perhaps even trying to do novel research on open questions

  • No doubt some of the elements we identified before in our backchaining are important too: career planning and upskilling

  • I’m sure there’s much more that could be done along these lines that I’m missing or that hasn’t been thought of yet at all
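
To make the spreadsheet bullet above a bit more concrete, here is a minimal sketch (in Python rather than an actual spreadsheet) of the kind of expected-value comparison such a document might contain. Every number and name below is a made-up placeholder rather than a real estimate; the point is only to show the shape of the calculation.

```python
# Minimal sketch of a back-of-envelope cost-effectiveness comparison.
# All figures are made-up placeholders, not real estimates.

def expected_qalys_per_dollar(qalys_if_success: float, p_success: float, cost: float) -> float:
    """Expected QALYs gained per dollar under a simple expected-value model."""
    return qalys_if_success * p_success / cost

# Hypothetical direct intervention (e.g. a health program with fairly certain effects).
direct = expected_qalys_per_dollar(qalys_if_success=10_000, p_success=0.9, cost=1_000_000)

# Hypothetical x-risk intervention: a tiny absolute risk reduction is converted into
# expected QALYs via a (hugely uncertain) estimate of the QALYs at stake.
qalys_at_stake = 8e9 * 40      # placeholder: ~8 billion people x 40 remaining life-years each
risk_reduction = 1e-9          # placeholder: absolute reduction in extinction probability
xrisk = expected_qalys_per_dollar(
    qalys_if_success=qalys_at_stake * risk_reduction, p_success=1.0, cost=1_000_000
)

print(f"Direct intervention: {direct:.5f} expected QALYs per dollar")
print(f"X-risk intervention: {xrisk:.5f} expected QALYs per dollar")
```

In practice the interesting work is in arguing about the inputs, not the arithmetic, and generating that kind of debate is exactly what the activities above are meant to do.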

Another illustrative picture: instead of university groups being marketing campaigns for Doing Good Better, imagine each of us as a mini 80,000 Hours research team,[3] starting from first principles and building our way up, assisted by the EA movement but not constrained by it.

Cause prio for its own sake, for the sake of EA

Currently, the modus operandi of EA university groups seems to be selling the EA movement to students by convincing them of the arguments for prioritizing the primary EA causes. It’s important to realize that the EA Handbook serves as an introduction to the movement called Effective Altruism[4] and to the various causes it has already identified as impactful, not as an introductory course in cause prioritization. It seems to me that this is the root of much of the unhealthy epistemics that can arise in university groups.[5]

I don’t think that students in my proposed team should stop engaging with the movement and its ideas. On the contrary, I think more ideas about doing good better have come from this milieu than from any other in history. I don’t think it’s a crime to defer at times. But before deferring, I think it’s important to realize that you’re deferring, and to make sure you understand and trust who or what you’re deferring to (and perhaps to first form an independent impression). Many intro fellowship curricula (e.g. the EA Handbook) come across more as manifestos than as research agendas, and are often presented as an ideology, not as evidence we can use to make progress on our research question.

I think it’s best that university groups unabashedly explore beyond the boundaries of the EA Forum and consider a wide range of opinions. Some people might see this as too risky: a few promising students who think deeply about the world’s most pressing problems might finish their cause prioritization unaligned with the EA movement, choosing some “non-EA cause” or preferring not to affiliate with the EA brand. At first glance this looks like a loss for EA, but healthier epistemics among people trying to solve the most pressing problems is a win for the greater good,[6] and if that’s what this movement cares about, it is a win for effective altruists too.

Possible problems with this approach

  • Some people might not be as interested in joining an EA group that looks like what I’ve proposed above. A cause prioritization introductory course, for example, might require a good amount of effort and might put off students who aren’t interested in math/econ/philosophy. I’m not sure whether this is a useful filter or whether we would be losing students who could contribute a lot of good to the world.

  • This proposal might just be untenable: it might be too much to ask already busy students to become a “research team,” or maybe only a very few students would be interested.

  • Maybe EA groups should be an intro to the EA movement, and maybe the whole epistemics thing is overrated. It might be true that the world in which we make lots of EAs is better than the world in which we make lots of good cause prioritizers.

Thoughts and feedback are very appreciated!

  1. ^

    Some other unhealthy effects this (might) have:

    a) create an unhealthy mindset of “Theory of Change”-executors (the organizers) and “Theory of Change”-subjects (the group members).

    b) as I discuss next, ignore less obvious factors like epistemics. Where does having good epistemics fit into this? It doesn’t, because the sort of epistemics we’re discussing is more a property of groups than of individuals, so it doesn’t fit neatly into this chain-of-thought questioning (though I’m open to the idea that sufficiently good backchaining might solve this).

  2. ^

    Is this naive, heretical frontchaining? I think you can answer those questions above in a hundred different valid ways, leading to many different possible backchained strategies. The backchaining above might help with finding the key ingredients you need to secure to make an impact, but IMO it shouldn’t be your group strategy. Instead, group strategy should come from testing different hypotheses about how groups might best work. (We probably don’t know of, and probably haven’t tried, the optimal strategies!) In what follows, I propose one such hypothesis.

  3. ^

    This analogy doesn’t work perfectly. I chose 80k because they work both on cause prioritization and on testing fit, career planning, upskilling, etc., which I might not be fully conveying in the title of this post. I don’t mean that we should just do cause prio research and never get around to doing anything. See more here.

  4. ^

    The presence of marketing material like this makes that clear.

  5. ^

    I think plenty has already been said about this (to the point that I think it’s been overstated and overgeneralized) and I won’t comment too much on it.

  6. ^

    Why should we care about epistemics? I think this is an important question to ask ourselves. If we (assuming a moral-realist sort of act utilitarianism) could figure out what the most pressing problems are with absolute certainty, then maybe we should start proselytizing by just telling people what the most important causes are and convincing them to work on them. This seems especially true if we are perfectly rational reasoners and so are the people we’re trying to convince.

    The problem is that none of these assumptions are true. We’re just guessing at what the most pressing problems are (especially as undergraduate university group organizers!), and there are all sorts of moral uncertainties in addition to the factual ones. I think this point should probably be developed further.