I think it’s important to distinguish between people’s expectations and the reality of what gets rewarded. Both matter: if people expect something to be unrewarding, they won’t do it even if it would be appreciated; and, perhaps even worse, if people expect to be rewarded for something but in fact there is limited support, they may waste time going down a dead end.
Another axis worth thinking about is what kind of rewards are given. The post prompts for social rewards, but I’m not sure why we should focus on this specifically: things like monetary compensation, work-life balance, location, etc. all matter and are determined at least in part by the decisions of EA orgs and grantmakers. Even if we focus on social rewards, does this look like gratitude from your colleagues, being invited to interesting events, having social media followers, a set of close friends you like, …? All of these can be rewarding, but the amount of weight people put on each varies a lot. I think it helps to be precise here, as otherwise two people might disagree about how rewarding a role is even though they agree about the facts of the matter.
Off the top of my head, here are some categories of people who I think often get rewarded too much / too little by the movement.
Overrated: AI safety researchers
I am an AI safety researcher, so I’ll start by deprecating myself! To be clear, I think AI safety should be a priority, and people who are making progress here deserve resources to let them scale up their research. But it seems to sometimes be put on a pedestal it doesn’t really belong on. Biosecurity, cause prioritization, improving institutional decision making, etc. all seem within an order of magnitude of AI at least—and people’s relative fit for an area can dwarf that difference. I think this is one of the cases where perception is more skewed than reality: e.g. although the bar for funding AI safety research does seem a bit lower than in other areas, I’ve generally seen promising projects and people in other areas attract funding relatively easily too.
I’d also like to see more critical evaluation of people’s research agendas. I see more deference than I’m comfortable with. It’s a tricky balance: we don’t want to strangle a research agenda at birth just because it doesn’t fit our preconceptions, so it makes sense to give individuals a decent amount of runway to pursue novel approaches. But accountability can actually make people more productive, both by motivating them and by giving them useful feedback. Given time constraints, I think it’s OK to sometimes fund or otherwise support people without having a good inside view of why their work matters, but I’d like to see people be more explicit about that in their own reasoning and in their communication with others, so we don’t get a positive feedback loop. Concretely, I’ve fairly often wished I could fund someone without giving an implied endorsement—not because I think their work is bad, but because I’m just not confident.
Overrated: Parroting popular arguments
There’s often a lot of deference to the opinions of high-status figures in the EA community. I don’t think this is necessarily bad per se: no one has time to look into every possible issue, so relying on expert opinion is a necessary shortcut. However, the question then arises: how are the so-called experts selected?
A worrying trend I’ve seen is that people who agree with the current in-vogue opinion and parrot the popular arguments often seem to be given more epistemic credit than they deserve, while those who try hard to form their own opinions, and sometimes make mistakes, are more likely to be viewed with skepticism. The tricky thing here is that the “parrots” are right more often than the “independent thinkers”—but the marginal contribution of the parrots to the debate is approximately zero.
I’m not sure how to fix this. One thing that can help is rewarding people for having good reasons for what they’re working on, rather than for whether you agree with their conclusions per se. So, if I meet someone who is e.g. working on AI safety but does not seem to have a strong grasp of the arguments for it, or of why they’re a good fit for it, I might encourage them to look at other options. Whereas if I meet someone working on e.g. asteroid deflection, which I’d personally guess is much less impactful, I’d be supportive if they had decent responses to my critique (even if I’m not convinced by those responses).
Underrated: Micro-entrepreneurship
A key part of entrepreneurship is identifying an opportunity others are overlooking, and then taking the initiative to exploit it. Entrepreneurs in the “Silicon Valley startup” mold are adequately rewarded (although I’ll note it’s common for founders to face intense skepticism early on, before the idea is validated). But there are opportunities to apply this style of thinking and work at varying scales: setting up a new community event, helping an org you join run better, etc. These are often taken for granted, especially since once the idea has been executed it may seem trivial. But such “obvious” ideas frequently languish for years because no one bothers to act on them.
For example, during my PhD at CHAI, I helped scale up an internship program, fundraised for and helped run a program giving cash grants to other PhD students (not myself) who were being held back by funding constraints, helped lead meetings to integrate new PhD students, fundraised for and helped set up a compute cluster, etc. None of these were particularly hard: I believe most other people in the group could have done them. But they didn’t, and I expect <50% of them would have happened if I hadn’t taken the initiative.
I wasn’t particularly rewarded for these efforts, and they’ll do little to help me in a research career. But I actually count this as a success case—in many orgs I wouldn’t even have had the freedom to take these actions! So I’d encourage leaders of orgs to at least give their individual contributors the freedom to take the lead on useful projects, and where possible to reward them for it, even if it’s not incentivized by the broader ecosystem.
Underrated: Direct work outside the community
Working directly for an EA org is rewarding in many ways (social connection, prestige), although by no means all (compensation is low to middling relative to what many of these individuals could earn elsewhere). But there are lots of direct paths to impact that don’t involve working with EAs!
For example, if you want to improve institutional decision-making, it might make sense to spend at least some time working at the kind of large governmental institution you seek to later reform. Yet even the most ardent civil servant would not claim that large government bureaucracies are a particularly exciting place to work.
Similarly, I see a lot of people working on AI safety at a handful of labs that have made safety a priority: e.g. DeepMind, OpenAI, Anthropic, Redwood. This makes a decent amount of sense, but might there not be considerable value in working at a company that might build powerful AI but currently has few internal experts on safety, such as Google Brain or Meta AI Research? This isn’t for everyone: you should ideally have some seniority already, and you need strong communication skills to get leadership and other teams excited by your work. But I expect it could be higher impact, both by getting a new group of people to work on safety problems and by helping ensure that any systems those labs build are aligned.
One thing that could help here is having a strong community outside of workplaces and narrow geographical hubs. It would also help to evaluate people’s careers more by their long-term trajectory, and not just by what they’re working on right now, noting that direct impact outside EA orgs will often, by necessity, involve some work that by our lights is of limited impact.