Helping newcomers be more objective with career choice

(co-authored by Daniel Wang)

Epistemic status: pretty sure that this problem wastes a lot of people’s time (and turns some people away), uncertain about how to fix it.

Summary

  • People often seem to waste time when they discover EA because they stay too attached to career plans they made beforehand.

  • We could try to make this less common by:

    • having facilitators in intro fellowships share their own motivated reasoning upon discovering EA,

    • adding readings about career plan changes (for instance, the part of Strangers Drowning about Julia Wise, or the piece On Saving the World, though the latter doesn’t seem newcomer-friendly).

The problem

We’ve noticed that some people (including ourselves) have a lot of emotional and aesthetic attachment to career ideas, which can bias their career choices well beyond the adjustment that accounting for personal fit would justify.

(The following two paragraphs are from Nikola’s perspective)

For instance, I know one person who, in middle school, decided that the best way to improve the world is to become a scientific researcher and donate their leftover income to scientific research. After going through a reading group on The Precipice, this person’s assessment of the best way to improve the world had not budged at all; they had locked in their career plans back in middle school. As far as I know, they have not engaged with EA since leaving the reading group.

I personally was very attached to the idea of creating space habitats to reduce x-risk by providing a “Plan B” in case something goes wrong on Earth. I came to the conclusion that this was the best way to improve the world after maybe a few hours of not-very-careful consideration, and locked in my answer in middle school. When I discovered EA, it took me months to look at my options more objectively and see that space colonization is not even near the top of the list. Seeing the part of Strangers Drowning about Julia Wise changing career plans might have helped. If I had known to look out for motivated reasoning in choosing a career plan, I could have saved a lot of time.

Possible fixes

Our point is that priming newcomers to look out for motivated reasoning in their own career planning could save them a lot of time and energy, and maybe prevent some people from leaving EA. We’re not certain how to do this effectively, though, since there’s some friction in telling people they could be wrong about something they’re very emotionally invested in.

One way to avoid hurting people’s feelings could be to point the barrel at yourself (if applicable) and point out ways your own reasoning about career plans was motivated. This could be especially useful for intro fellowship facilitators, who could mention it at the beginning of the fellowship and come back to it from time to time. It would also humanize the facilitator and get across the message that making mistakes is okay and normal, but that fixing them is important.

Another way could be to point newcomers to other EAs who have had motivated reasoning and/or big changes in career plans, like Julia Wise or Nate Soares. It would probably be good to have examples of people in more STEM-y fields (the two people mentioned planned to work in social work and politics/economics respectively); Nikola will probably write a post about their own change in career plan to help meet this need. Also, the post by Nate Soares doesn’t seem very newcomer-friendly in its vocabulary, so we may need more newcomer-friendly writing about career plan changes in general.

Another angle is to look at the probabilities: just what are the odds that, after some independent consideration and research, someone would have found the absolute best way to do good? Of all the people who have invested a similar amount of consideration and research, how many turn out to be right?

Another way could be to point to scientific studies or rationalist writing about motivated reasoning, but we’re not certain how helpful this would be: most people who got the point of the readings would also get the implication that their own reasoning is motivated, and might feel insulted.

So, to community builders: it might be helpful to subtly warn newcomers against being too attached to particular ways of impacting the world, and to remind them that finding out there are better ways to improve the world than they previously thought is a good thing.