Large epistemological concerns I should maybe have about EA a priori

Summary

I have become more truthseeking and epistemically modest in recent months and feel I have to re-evaluate my ‘EA-flavored’ beliefs, including:

  1. My particular takes about particular cause areas (chiefly alignment). Often, these feel immodest and/​or copied from specific high-status people.

  2. Trust in the “EA viewpoint” on empirical issues (e.g., on AI risk). People tend to believe in stories about things that are too big for them to understand, and I don’t know whether the EA viewpoint is just one such plausible story.

  3. Are these large empirical questions too hard for us to make reasonable guesses? Are we deluding ourselves in thinking we are better than most other ideologies that have been mostly wrong throughout history?

  4. Can I assume ‘EA-flavored’ takes on moral philosophy, such as utilitarianism-flavored stuff, or should I be more ‘morally centrist’?

  5. Can I, as a smart and truthseeking person, do better than just deferring on, say, “Might AI lead to extinction?”, even though there are smarter and more epistemically virtuous people I could defer to?

  6. Should I hold very moderate views on everything?

  7. Can EA, as a “smart and truthseeking” movement, assume its opinions are more accurate than other expert groups’?

Note

I originally wrote this as a private doc, but I thought maybe it’s valuable to publish. I’ve only minimally edited it.

Also, I now think the epistemological concerns listed below aren’t super clearly carved and have a lot of overlap. The list was never meant to be a perfect carving, just to gesture at the shape of my overall concerns, but even so, I’d write it differently if I were writing it today.

Motivation

For some time now, I’ve wanted nothing more than to finish university and just work on EA projects I love. I’m about to finish my third year of university and could do just that. A likely thing I would work on is alignment field-building, e.g., helping to run the SERI MATS program again. (In this doc, I will use alignment field-building as the representative of all the community-building/operations-y projects I’d like to work on, for simplicity.)

However, in recent months, I have become more careful about how I form opinions. I am more truthseeking and more epistemically modest (but also more hopeful that I can do more than blind deferral in complex domains). I now no longer endorse the epistemics (used here broadly as “ways of forming beliefs”) that led me to alignment field-building in the first place. For example, I think this in part looked like “chasing cool, weird ideas that feel right to me” and “believing whatever high-status EAs believe”.

I am now deeply unsure about many assumptions underpinning the plan to do alignment field-building. I think I need to take some months to re-evaluate these assumptions.

In particular, here are the questions I feel I need to re-evaluate:

1. What should my particular takes about particular cause areas (chiefly alignment) and about community building be?

My current takes often feel immodest and/​or copied from specific high-status people. For example, my takes on which alignment agendas are good are entirely copied from a specific Berkeley bubble. My takes on the size of the “community building multiplier” are largely based on quite immodest personal calculations, disregarding that many “experts” think the multiplier is lower.

I don’t know what the right amount of immodesty and copying from high-status people is, but I’d like to at least try to get closer.

2. Is the “EA viewpoint” on empirical issues (e.g., on AI risk) correct (because we are so smart)?

Up until recently, I just assumed (a part of) EA is right about large empirical questions like “How effectively-altruistic is ‘Systemic Change’?”, “How high are x-risks?”, and “Is AI an x-risk?”. (“Empirical” as opposed to “moral”.) At first, this was maybe a naïve kind of tribalistic support; later, it was because of the “superior epistemics” of EAs. The poster version of this is “Just believe whatever Open Phil says”.

Here’s my concern: In general, people adopt stories they like on big questions, e.g., the capitalism-is-cancer-and-we-need-to-overhaul-the-system story or the AI-will-change-everything-tech-utopia story. People don’t seek out all the cruxy information and form credences to actually get closer to the truth. I used to be fine just backing “a plausible story of how things are”, as I suspect many EAs are. But now I want to back the correct story of how things are.

I’m wondering if the EA/Open Phil worldview is just a plausible story. This story probably contains a lot of truthseeking and truth on lower-level questions, such as “How effective is deworming?”. But on high-level questions such as “How big a deal is AGI?”, maybe it is close to impossible to avoid just believing in a story rather than doing the hard truthseeking thing. Maybe expecting otherwise would be holding EA/Open Phil to an impossible standard. I simply don’t know currently whether EA/Open Phil epistemics are better than that, and therefore I should not defer to them unreservedly.

I am even more worried about this in the context of bubbles of EAs in the Bay Area. I’ve perceived people there as having quite a strong desire to buy into big/exciting stories.

Maybe the EA/Open Phil story is therefore roughly as likely to be true as the stories of other smart communities, e.g., other ML experts. This seems especially damning for parts of the story where (a part of) EA is taking a minority view. A modest person would assume that EA is wrong in such cases.

I haven’t really heard many convincing stories that run counter to EA, but I also haven’t really tried. Sure, I have heard counter-arguments repeated by EAs, but I’ve never sought out disagreeing communities and heard them out on their own terms. An additional problem is that others probably don’t spend as much time refuting EA as EAs spend backing it with arguments.

For illustration, I recently read an article criticizing the way EA often deals with low-probability risks. The author claims their critical view is common in the relevant parts of academia. I wouldn’t even be surprised if that were the case. This makes my blind assumption that x-risk is a core moral priority of our time seem unjustified. I haven’t even considered plausible alternative views! (Sadly, I don’t remember the name of the article.)

3. Are humans in general equipped to answer such huge empirical questions?

Maybe questions such as “How high are x-risks?” are just so hard that even our best guesses are maybe 5% likely to be right. We delude ourselves into thinking we understand things and build solid theories, but really we are just like all the other big ideologies that have come and gone in history. We’re like communists who believe they have found the ultimate form of society, or like hippies who believe they have figured out that the answer to everything is love. (Or like 1930s eugenicists who believe they should “improve” humanity.)

Here are a bunch of related concerning considerations:

  • Maybe there is something very big and real about the term “groupthink” that we don’t understand.

  • Maybe our epistemics don’t weigh that heavily in all of this; maybe they increase our chances of being correct from 5% to 6%.

  • Maybe the main ingredient to finding answers to the big questions throughout history has just been guessing a lot, not being smart and truthseeking.

  • Maybe there’s a huge illusion in EA of “someone else has probably worked out these big assumptions we are making”. This goes all the way up to the person at Open Phil thinking “Holden has probably worked these out” but actually no one has.

  • Maybe it’s really hard for people to notice when they are not smart enough to have accurate views on something.

  • Maybe the base rate of correct big ideas is just very low.

I don’t understand any of these dynamics well, and I don’t know whether EA is currently falling prey to them. Since these concerns seem plausibly big, they seem worth investigating.

Again, I am even more worried about this in the context of EA bubbles in the Bay Area.

4. “EA-flavored” moral philosophy

I’ve been assuming a lot of “EA-flavored” takes on moral philosophy. This includes utilitarianism-flavored stuff, de-emphasizing rules/​duties/​justice/​other moral goods, and totalist population ethics. Some of them are minority views, including among very smart subject experts. I am considering whether I should be more “morally centrist”. Depending on my answer to this question, this might imply anything from spending a bit more time with my family to changing my work focus to something “robustly good” like clean tech.

Interlude—information value

At this point, if not earlier, I’m expecting a reaction like “But how on earth are you going to make progress on questions like fundamental moral philosophy?”.

First, note that I do not need to make progress on the fundamental philosophical/scientific issues as long as I can make progress on my epistemic strategies about them. E.g., I don’t need to decisively prove that utilitarianism is correct if I can just decide that my epistemic strategy should be a bit more immodest and therefore believe in utilitarianism somewhat immodestly.

Practically, looking at my list of assumptions to re-evaluate, I feel like I could easily change my mind on each of them in a matter of weeks or months. The main problem is that I haven’t had the time to even look at them superficially and read one article or talk to one person about each of them. I think the information value of some deliberation is quite high and justifies investing some time. And it’s better to do so sooner rather than later.

An objection to this may be: “Sure, your beliefs might change. But how can you expect your beliefs post-deliberation to be more accurate than pre-deliberation? Philosophers have been split over these questions for millennia, and that doesn’t change no matter how much you deliberate.”

I would respond that this is precisely the claim of the concept called “epistemics”: that some ways of forming beliefs produce more accuracy than others. E.g., modest epistemics might produce more accurate beliefs than immodest epistemics. So if I have some credence that epistemics are powerful enough to do this in a domain like moral philosophy, then I’m justified in thinking my accuracy might increase post-deliberation. (And I do have some credence in that.)

Also, I’m expecting a gut reaction like “I’m skeptical that someone who just wants to have an impact in alignment field-building should end up having to do philosophy instead/first.” I don’t have much to say in response except that my reasoning seems to straightforwardly imply this. I would still be interested in whether many people have this gut reaction, so please feel free to leave a comment if you do!

Back to questions I need to re-evaluate

From the four assumptions listed above, it’s probably evident that this is going in a very “meta” epistemic direction. How much can I trust EA in forming my beliefs? How much can I trust myself, and in which situations?

Here are the more “meta” epistemics questions to re-evaluate:

5. Can I, as a smart and truthseeking person, do better than just deferring on complex empirical/​moral questions?

For example, can I do better than just deferring to the “largest and smartest” expert group on “Might AI lead to extinction?” (which seems to be EA)? Can I instead look at the arguments and epistemics of EAs versus, say, opposing academics and reach a better conclusion? (Better in the sense of “more likely to be correct”.) If so, by how much, and how exactly should I go about it?

Just in case you are thinking “clearly you can do better”: Consider the case of a smarter, more knowledgeable person with better epistemics than me. I know such a person, and they’ve even spent a lot more time thinking about “Might AI lead to extinction?” than me. They are probably also better than me at doing the whole weighing up different people’s views thing. From this angle, it seems unlikely that I can do better than just deferring to them. (To their view ‘all things considered’, not their view ‘by their own lights’.)

Just in case you are thinking “clearly you can’t do better”: This seems to contradict the way essentially everyone behaves in practice. I know no one who only ever defers to the “largest and smartest” expert group on everything and never presumes to look at the arguments, or at least the epistemics, of different expert groups.

6. Should I be more normal?

If I lean towards thinking I can’t do better than just deferring on complex empirical/moral questions, should I hold very moderate views on everything? Should I be a third deontologist, a third virtue ethicist, and a third consequentialist, so to speak? Should I believe climate change is the biggest issue of our time? Should I stop drinking meal shakes? (I’m being mostly serious.)

(This is similar to point 4.)

7. Can EA, as a “smart and truthseeking” movement, assume its opinions are more accurate than other expert groups’?

We seem to often hope this is the case. E.g., we hope we are right about AI being an existential risk based on how smart and truthseeking we are. (In another sense, of course, we hope we are wrong.)

More on information value

I want to reiterate here that, even though these questions seem daunting, I think I could learn something that changes my mind in a lasting way within weeks or months. For example, I could imagine finding out that almost no one supports epistemic modesty in its strongest form and becoming more immodest as a result. Or I could imagine finding out that influential EAs haven’t thought about modesty much and becoming more cautious about “EA beliefs” as a result. I think it therefore makes sense to think about this stuff, and do so now rather than later.

I am grateful to Isaac Dunn and Pedro Oliboni for helpful feedback on earlier versions of this post.