The most important lesson I learned after ten years in EA

Epistemic status: a rambling, sometimes sentimental, sometimes ranting, sometimes informative, sometimes aesthetic autobiographical reflection on ten years trying to do the most good.

Reminder that you can listen to this post on your podcast player within 24 hours of it reaching 25 upvotes using the Nonlinear Library.

Ten years ago I first heard about effective altruism, and it was one of the best things that ever happened to me. I was immediately sold. In fact, at the time I had been trying to start a movement of “Scientific World-Changers”, so you can imagine my excitement when I discovered that there was already a movement and it had a much better acronym.

Since then it’s been a crazy decade. I’ve started four EA charities, spanning all the major cause areas: global poverty, animal welfare, AI x-risk, and meta.

I’ve been so frugal that my partner and I had only one (always damp) towel between us and thought that getting a bed frame for our mattress would cost too much dead baby currency. I’ve since updated to being so time-frugal that I’m working on launching a fund to remove financial barriers to productivity for longtermists, such as providing therapy or dishwashers.

I’ve been on fire, working long hours trying to save the world. I’ve been burnt out, working long hours trying to set up systems so I don’t overdo it again.

And through it all, the EA movement, despite its many flaws, has been such a huge source of meaning, growth, and community in my life.

Which is why I wanted to do something on this special anniversary. I asked on Facebook and somebody suggested the most EA way to celebrate ever: writing a post on the EA Forum about my biggest lessons learned. Hence this post.

So, what have I learned over ten years of obsessively trying to answer, and act on, the question: how do I do the most good?

There are a lot of contenders. Here are a few that first came to mind:

  • Fundraising is one of the highest leverage meta-skills you can develop. Instead of trying to find a job that does the most good, you can simply figure out what does the most good, then fundraise for that particular thing.

  • Hiring is perhaps the second highest leverage meta-skill. Imagine you could have any skill in the world. That’s what hiring gives you. You don’t have to learn how to code or spend decades in a particular political arena. You can just hire people who already have those skills and experiences, and who are probably better at them than you would ever have become.

  • It’s OK to care about your happiness. And not just because it’ll help you have more impact (which it will). It’s OK to care because you care intrinsically about your happiness. H/t to Spencer’s valuism for convincing me of this in the end. (Great resources on mental health and EA here and here.)

  • The more meta, the bettah. While it’s important to avoid meta traps and not have insanely long theories of change, generally speaking, meta is where you can find the truly great hits. It’s where you can set up passive impact streams, apply small multipliers to large numbers, and tap all the other sources of juicy utility. It also tends to leave you with skills that cross-apply to a wide range of cause areas, giving you sweet, sweet option value.

Which brings me to the biggest lesson I’ve learned over all this time:

The biggest lesson I learned from my altruistic journey so far: widen your f***ing confidence intervals

You know nothing about doing good in the world.

Seriously.

To be an EA is to find out, again and again and again, that what you thought was the best thing to do was wrong. You think you know what’s highest impact and you’re almost certainly seriously mistaken.

Every single time I have been so damn certain that this was the time we’d finally found the thing that totally definitely helped in a large way.

Every single time I have gone on to be thwarted by:

  • Crucial considerations

  • Evidence

  • Social influence and psychological barriers

Crucial considerations


God, I hate crucial considerations sometimes!

There’s nothing quite like discovering an argument that totally destroys the expected value of your last two years of work. It’s even worse when you find out that there’s a good chance you made things worse!

There are flow-through effects, speciesism, population ethics, meta-ethics, and don’t get me started on counterfactuals.

Damn counterfactuals to all hell.

They complicate everything and yet you can’t ignore them!

In a way, this shouldn’t be surprising. Doing good is probably one of the most complicated problems you could dedicate your life to. At least with rocket science you find out whether the rocket flew or not. With trying to live ethically, you never know for sure whether you did it right. Even if you get all of the factual elements correct and execute flawlessly, you could still be pursuing the wrong goals, since ethics is a deeply unsolved field.

Evidence


Original comic here

And yet, in spite of the fact that your endeavor is founded upon a fundamentally unsolved problem, there’s still plenty of scope for you to get the facts wrong. I spent eight of my ten EA years in neartermist causes where we have these pesky feedback loops that ruin everything all the time.

Great for learning!

Mostly learning that you know nothing, but still.

It was quite the experience. There’s nothing like immersing yourself in the scientific literature of a field to make you realize how little we understand about anything at all, and global health studies are no exception. I can’t remember the exact details, but I remember Eva Vivalt doing some amazing meta-science in the field to see how often studies replicated. The results were dismal. Even if you ran the exact same study in the same country but a year later, odds were high you’d get a different result. Read her paper or listen to her podcast episode if you want to experience some epistemic angst.

The scientific method shows us that even when we have all the best arguments we can make from our armchairs, when the rubber hits the road and we can test our hypotheses, we as humans are wrong most of the time. Even if we got all the crucial considerations correct and had our ethics in order, we’d probably find out that something hadn’t worked.

And we don’t even know why.

Science shows us that the world is complicated and we are just monkeys.

Monkeys in shoes. Monkeys who’ve gone to the moon. But monkeys nonetheless.

Social influence and psychological barriers

Another side effect of being a monkey is that other monkeys have an undue influence on us. It’s very easy to think that we’ve come to a conclusion because we rationally considered all the arguments.

And maybe you did indeed come to those conclusions via some very good reasoning.

But it could also be that you haven’t considered some other arguments because if you did, some of your monkey friends would give you funny looks. Or maybe you wouldn’t hang out with them much anymore, because if you really took that argument seriously, you’d no longer have overlapping interests.

The most common way that being a social monkey leads to missing ideas, though, stems from the fact that one of the best ways to change your mind is to talk to a monkey who disagrees with you. If you read a book that challenges your beliefs, it’s easy to think “Ah, but what about Counterpoint B! This is why the whole idea is wrong”, then proceed to read something by a more reasonable author.

It could be, though, that they covered Counterpoint B in chapter twelve. Of course, it’s not really possible to know that without reading the whole thing, which is not a reasonable strategy. There are too many ideas in the world to give them all your full attention.

This is where talking to a monkey with differing viewpoints can help. If you’re discussing the idea with them and you say Counterpoint B, they can immediately let you know about Counter-counterpoint B. They can force you to look at it, even if your monkey-brain really really doesn’t want to look at it. Cause goddammit, you’ve changed your mind a million times already. Can’t you just be left in peace with your current comfortable views?

This is why so many people opt to only hang out with monkeys who they agree with, or to not “stir the pot” and bring up those pesky disagreements.

This is all well and good if you’re trying to lead a simple happy life. However, if you’re trying to lead a happy and impactful life, you’ll want to maintain friendships with people of diverse viewpoints. You’ll want to purposefully talk with people who disagree with you. Talk to EAs from different cause areas. Talk to people who aren’t EAs at all!

Don’t create a sheltered group of friends who all agree with each other; instead, debate ideas endlessly. Become so good at disagreeing that it doesn’t feel like a debate, but like a joyous exploration of ideas amongst fellow idea-lovers.

Because, if there’s anything that I have most learned in all my years of EA, it’s that we’re almost certainly wrong about what is best. We’re wrong about what is good. We’re wrong about how to go about getting what is good.

And that’s the best part about EA. It’s not an ideology but a question. It’s not a sprint, and it’s not a marathon—it’s a maze. It’s a journey where I don’t quite know the destination, but I do know that it’s the most meaningful one I could ever pursue, and I’m excited to see where the next ten years take me.