Six Takeaways from EA Global and EA Retreats
I attended EAG London, as well as four retreats for student group organizers (these were focused mostly on community building & how to make strong university groups).
In this post, I summarize some of my takeaways and lessons learned. I also offer a few suggestions, heuristics, and action items for those who want to put some of these ideas into practice.
Here are my takeaways:
Think Bigger
Take More [Calculated] Risks
Be More Entrepreneurial and Agentic (Do Things)
Apply for Funding
Value Your Time
Reach out to EAs
Takeaway #1: Think bigger
In the early 2010s, EA was an extremely young movement with some idealistic thinkers. In 2021, it is still a very young movement, and it still has idealistic thinkers, but it also has a lot more energy and funding.
We can think bigger, grander, and more ambitiously about how to improve the present and the future. There is a particular interest in megaprojects — highly impactful projects that could scale widely, even if they require large amounts of funding.
Okay, thinking bigger and grander sounds good in theory, but how do we actually do it? Here are some heuristics that I’ve been using & that others have recommended:
For any project idea, ask yourself “how could this be 10X or 100X bigger or more impactful?”
For any project idea, ask yourself “how could this be better if I had 10X or 100X more funding?”
Imagine that this project or career went really, really well. What happened? What does the 90th or 99th percentile version of this look like? (Or, more simply, what does the best-case scenario look like?)
Spend 10 minutes brainstorming ideas (including ideas that seem crazy or wild) and then spend 10 minutes thinking about how each idea can be implemented (see also these LessWrong posts about babbling/pruning).
Ask yourself “if something came along that solved this entire problem, what would it be?”
Example: Sydney V founded Uncommon Sense, a summer program in which high school students learn about rationality and longtermism. The program reached about 20 students in its first year. Sydney’s first reaction was to think “ah, great, next year, we can do a modified version of Uncommon Sense and expand to about 30 students.”
Then, after being prompted by other EAs, she began to think more ambitiously about her plan to scale, and she asked herself how the project could be much more impactful. She realized that she could work with other EAs to turn Uncommon Sense into a large program that could potentially receive thousands of applications and hundreds of attendees. She’s now working on this (highly ambitious) high school outreach project.
Takeaway #2: Take more (calculated) risks
TLDR: Consider upsides, and don’t be overly afraid of [specific types of] failure (see also this 80k post about ambition & risks). Note: Many points in this section come from, or are inspired by, a talk by Jonas Vollmer.
Consider Upsides
Here’s a conversation that I’ve had in my head, and that I’ve also heard from other EAs:
Me: Here’s an idea!
Risk-averse me: That’s a cool idea. But here are a few ways it can go wrong. Also, you haven’t done something like [idea] before. Also, base rates for [idea] succeeding are low. And [idea] could even cause harm—especially if [bad thing #1] happens or [bad thing #2] happens.
Me: Hm, yeah. Good points. I should probably think of a different idea.
The problem is that there’s a crucial step that’s missing. Many people find it more natural to think of downsides (how could this go wrong?) than upsides (how could this go right?). But when estimating the expected value of a project, it’s important to weigh the upsides against the downsides.
This idea is deceptively simple and seems obvious when spelled out in writing. And yet, in conversations with myself and others, I routinely find it takes more effort and intentionality to dive into the upsides. Here’s how the conversation could go:
Me: Here’s an idea!
Upside me: That’s a cool idea. Here’s how it can go right. If it went really well, it could generate impact [like this] and [like that]. And then version 2.0 would be even better in [this way] and [that way], and it could scale up to achieve [concrete description of how this achieves impact].
Risk-averse me: True, but here are some possible risks. I’ll divide them into [ways it could generate no impact] and [ways it could be net negative]. Now, let’s focus on [ways it could be net negative]. Here are [people you could talk to] and [implementation strategies you could use] to reduce those risks. Now, let’s start weighing the upsides and the downsides…
Don’t be too afraid of [specific types of] failure
There are at least three ways a project could fail:
Neutral Failures: The project does not deliver on its expected value. The project has a disappointingly small impact or a “net neutral” effect. (Example: You launch a start-up, and it fails to attract investors. You waste 3 months of your time, and your start-up doesn’t help anyone).
Reversible Net Negative Failures: The project actively causes harm, but the harm is reversible. (Example: You release a research report to 50 people which contains some important errors. You catch the mistake and issue a correction, which nearly everyone receives.)
Irreversible Net Negative Failures: The project actively causes harm, and the harm is not reversible. (Example: You launch a messaging campaign about effective altruism and introduce 10,000 people to EA. The messaging is poorly crafted, easy to misconstrue, and offensive. 10,000 people develop negative first impressions of EA that are unlikely to change.)
The takeaway here is that most people place too much weight on neutral failures and reversible net negative failures. If a project has a 10% chance of generating a massive amount of impact, and a 90% chance of failing in a “neutral failure” mode, the expected value of the project is still massive. The expected value falls by an order of magnitude if there’s a 10% chance of generating massive impact but a 9% chance of causing an (equally massive) irreversible negative impact.
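To make that arithmetic concrete, here is a minimal sketch with made-up numbers (a massive success is worth +100 “impact units” and an equally massive irreversible harm is worth −100; these figures are illustrative assumptions, not from the post):

```python
# Toy expected-value comparison; probabilities and the +/-100 "impact units"
# are illustrative assumptions, not figures from the post.

def expected_value(outcomes):
    """Sum of probability * impact over mutually exclusive outcomes."""
    return sum(p * impact for p, impact in outcomes)

# 10% chance of massive impact, 90% chance of a neutral failure.
mostly_neutral_failure = [(0.10, 100), (0.90, 0)]

# 10% chance of massive impact, 9% chance of an equally massive
# irreversible harm, 81% chance of a neutral failure.
some_irreversible_harm = [(0.10, 100), (0.09, -100), (0.81, 0)]

print(expected_value(mostly_neutral_failure))  # 10.0
print(expected_value(some_irreversible_harm))  # 1.0 (an order of magnitude lower)
```

Even though the neutral-failure branch is far more likely than the harmful branch, it contributes nothing to the expected value, while the small chance of irreversible harm erases most of it.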
In short: breaking risks down into the different types of failure can be useful. We should generally be deterred less by neutral failures and reversible net negative failures and pay careful attention to irreversible net negative failures. (Though we should also be willing to accept some risk of irreversible net negative failures, especially when the possible upside is high. Conversely, if the upside is relatively low, even neutral failures or reversible failures should be enough to deter us).
Takeaway #3: Be more entrepreneurial and agentic (do things!)
Imagine a world in which there are lots of EA organizations with lots of open jobs. The movement has a surplus of managers and mentors, and it’s looking for conscientious people who can perform well in structured and supervised environments.
Now, imagine a world in which there are a small handful of EA organizations with limited capacity to train and manage new hires. The movement is people-constrained, especially manager-constrained, and it’s looking for self-driven people who can initiate impactful projects with limited supervision and oversight.
One takeaway is that EA is currently much closer to world #2. Effective altruism is in an interesting position: there is a relatively large amount of funding available for people to start impactful projects (particularly relating to longtermism) and a relatively limited number of people who can work independently to take on these opportunities.
This relates to a topic that has come up on the EA Forum: Why is it so hard to get jobs at EA orgs (see here and here; also see here for a recent discussion)? It seems strange, especially given the funding situation & the importance of giving people opportunities to do highly impactful work.
The situation becomes clearer once you recognize that one major bottleneck for EA orgs is the number of managers/mentors. It takes a lot of time (and a particular skillset/disposition) to supervise new people.
Given that we are in world #2, what should we do? How can we prepare ourselves to be most impactful given the current climate of effective altruism?
Brainstorm independent project ideas
What would you do if you had to start a new initiative in the next 3-6 months? Reflecting regularly on this question seems like it would a) generate some plausibly useful ideas and b) develop “thinking skills” that will be helpful even if you don’t pursue any of these ideas.
If you’re looking for inspiration, I recommend checking out previous grants made by the EA Infrastructure Fund (EAIF). EAIF releases Payout Reports (see example from the Long-Term Future Fund here). The reports describe grants that they made & include brief analyses of each project (e.g., describing potential upsides and potential concerns). There are also lists of research ideas (see an example here) and project ideas (see an example here). Note that I think it’s probably good to spend some time brainstorming before you do a deep dive into these resources, since they could anchor you & reduce your likelihood of finding novel/creative ideas.
Cultivate agentic and entrepreneurial skills
Some people are probably naturally very agentic. I’ve met many EAs, though, who did not feel agentic for much of their lives, and then they experienced a sudden growth in agency/doing-stuff-on-my-own-ness. What are some ways we can cultivate this?
One idea is to do “mini-projects”—things that require low amounts of time but force you to practice independent thinking/doing skills. In my view, the primary purpose of a mini-project is to help you develop aptitudes (so it’s fine if the project doesn’t end up having any direct impact). Examples:
Write an EA Forum post (you don’t have to post it & it doesn’t have to be “good”).
Do 2 hours of research on a question you find interesting (e.g., are fish conscious?)
Review and critique a research report (i.e., red-teaming; see here).
Start a reading group or co-working group.
Reach out to someone you respect to get feedback on an idea or career plan.
Reach out to other EAs to see what ideas they have (e.g., the Norwegian University of Science and Technology EA group seems to have a culture that emphasizes mini-projects, and you can learn more here).
Learn and practice rationality techniques (Goal Factoring seems especially relevant for learning to develop agentic/creative goals. Also, the entire Hammertime series.)
Check out recent posts with the “get involved” tag on the EA Forum.
Read stuff about agency, entrepreneurship, and making progress. Examples include essays by Paul Graham and Inadequate Equilibria.
Ask other people (and yourself) questions that foster agency.
I love this list by Chana Messinger. Example questions:
Yeah, someone *should* do that. Why not you?
That’s a great thought—have you written it up somewhere? I’d be excited to share it if so.
You want to do that but don’t think you will? Do you want to make a concrete plan now?
Collect Feedback
Get feedback early and often! Some tips:
Present people with multiple ideas. I think it is often easier to say “X idea is better than Y idea” than to discuss X idea in a vacuum. (Also, if you only have one idea, and it’s bad, the other person might not want to tell you that directly, because then they’re crushing your only idea.)
Ask for feedback early. One easy way to acquire quick feedback is by posting questions to the EA Forum. You could also reach out to people at student groups, city chapters, and social media pages (ex: Effective Altruism Editing and Review).
Try to understand why people believe the things they do. A classic failure mode is “I told [person who I think has good judgment] about [idea], and they [loved/hated it]! So now I’ve updated in the direction of it being [good/bad].”
Instead of doing this, I think it can be quite important to understand the feedback. This often means asking lots of clarification questions, offering pushback, and finding sources of disagreement (e.g., if X was not an issue, how much would your opinion of the idea change?). See these posts for more on this topic.
Takeaway #4: Apply for Funding
Many EAs at the retreats and EAG argued that EAs should generally be more willing to apply for funding. Some people feel nervous taking “EA money,” especially when they’re not confident that they’ll succeed, or they’re reluctant to secure money to fund their time. Here are a few things I learned about funding (see also this post which raises points about EA funds). Note that many of the details depend on the particular project and the particular funder—but I’ve tried to list some fairly generalizable principles:
Applying for funding is a useful way to receive feedback and support. Funders will often know whether others are working on similar projects, and they can connect you with people with relevant expertise and help you refine your ideas.
Funders are willing to cover the cost of your time (indeed, this is often the largest expense). In some cases, this even involves time you spend planning, brainstorming, prioritizing between ideas, and learning new skills. The underlying logic is that a) your time is valuable and b) receiving funding makes you more likely to follow through in a high-quality way.
Applying for funding is quick. Many EA grant applications are much less time-consuming than applications for other kinds of grants (e.g., academic or foundation grants). Applying to EA Funds, for instance, is meant to take <2 hours.
The biggest takeaway is that you should probably be more willing to apply for funding.
See a list of funding opportunities here. Here’s a relevant quote from that post:
I strongly encourage people to consider applying for one or more of these things. Given how quick applying often is and how impactful funded projects often are, applying is often worthwhile in expectation even if your odds of getting funding aren’t very high. (I think the same basic logic applies to job applications.)
Takeaway #5: Value your time
How much money would you trade to free up an hour of your time? (see this post and this ClearerThinking module for some advice on how to answer this question).
A related question: how much money should the EA community be willing to pay to free up an hour of time for a highly engaged EA? In a 2018 survey by 80,000 Hours, EA org leaders were asked to estimate the amount of money they would trade to give up a recent junior/senior hire for three years (see exact wording here). The median response was $450,000/3 years for a junior hire (about $75/hour) and $3,000,000/3 years (about $500/hour) for a senior hire. Note that the figures are higher when using averages instead of medians ($175/hour for a junior hire and $1,233/hour for a senior hire).
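For reference, here is how the hourly figures fall out of the three-year numbers, a quick back-of-the-envelope sketch that assumes roughly 2,000 working hours per year (the hours assumption is mine, not the survey’s):

```python
# Convert "value of a hire over 3 years" into a rough hourly rate.
# Assumes ~2,000 working hours per year (my assumption, not the survey's).
HOURS = 2_000 * 3  # three years of work

def hourly_rate(value_over_three_years):
    return value_over_three_years / HOURS

print(hourly_rate(450_000))    # 75.0   -> ~$75/hour  (median, junior hire)
print(hourly_rate(3_000_000))  # 500.0  -> ~$500/hour (median, senior hire)
```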
My best guess is that these numbers have increased from 2018, in large part due to the increased availability of money and the fact that many areas seem skill-constrained/people-constrained (see this post).
Okay, so valuing my time is important, and there are some heuristics I can use to estimate the value of an hour of my time. What do I do next?
Find ways to spend money to save time. This post offers several suggestions, such as paying for laundry services, taking Ubers, and eating takeout. You may also create your own list of ways you could spend money to save time.
Takeaway #6: Reach out to EAs
My experiences at EAG and the retreats reinforced a realization that I had at the EA student summit: Many EAs want to talk to you (yes, that means you), and you should have a rather low bar for reaching out to them.
I have even more confidence in my reflection from last year’s student summit:
I thought that [EAs] would have far better things to do with their time than talk to me. I need to wait until I have a really impressive idea before I request the time of other EAs—I could distract them from discovering the next highly effective charity or the solution to AI safety!
Nearly all of my experiences at the summit went against this idea. People wanted to talk to me, and others, about raw, unpolished ideas. Almost every EA I spoke to—including the “big names”—seemed authentically and intrinsically motivated to talk to students about their interests and ideas. I honestly think this was my biggest surprise of the conference—there are so many EAs who would genuinely like to talk to you.
Caveats
One of the challenges of writing these takeaways is that I have no idea who you are. The advice in this post is directed at the “median EA” (and some of it is directed at the “median EA attending EAG/retreats.”)
As a result, some of these takeaways might be more relevant to you than others. If you are already much more willing to take risks than the median EA, then “take more risks” might not be useful advice. If you are more reluctant to reach out to others than the median EA, then “reach out to EAs” may be especially useful advice.
Other caveats:
My EAG experience nearly exclusively consisted of 1-on-1s.
These 1-on-1s disproportionately focused on EA meta, community building, and community health. The student retreats also focused on these topics. I encourage other EAG attendees to share takeaways/insights about other cause areas.
These are takeaways that I found helpful, but they are not universal truths. All of this should be interpreted with caution and skepticism.
What are your takeaways?
EAG and the student group retreats were energizing, inspiring, and informative. I would love to hear from others who attended EAG or retreats. Some prompts to get you thinking (see also this post for additional prompts & a $500 bounty):
What was the most interesting or surprising thing you learned?
What were the best 30 minutes of your EAG/retreat experience?
Do you disagree with any of my claims/takeaways?
I’m grateful to Chana Messinger, Olivia Jimenez, Ashley Lin, Liam Alexander, Jack Goldberg, Richard Ngo, Lizka, and Jonas Vollmer for providing feedback on this post. I’m also grateful to everyone involved in organizing EAG and EA retreats.