(I wasn’t going to comment, but rn I’m the only person who disagrees)
Some reasons to think the current proportion of e2g’ers is not too low:
* There aren’t many salient examples of people doing direct work that I want to switch to e2g.
* Doing direct work gives you a lot more exposure to great giving opportunities.
* Many people doing direct work I know wouldn’t earn dramatically more if they switched to e2g.
* Most people doing e2g aren’t doing super ambitious e2g (e.g. putting themselves in a position to donate >> $1M/year).
* E2g is often less well optimised for learning useful object-level knowledge and skills than direct work.
* Some EAs were early at AI companies and now have net worths of >> $100M—they will likely spend some of this on EA-aligned philanthropy.
* There are already billions of dollars in philanthropic capital for EA-aligned projects, and basically all funders I’ve spoken to feel that there aren’t enough very exciting fundable projects—so directionally, I’d be a bit surprised if fewer people should be following paths that are less optimised for directly working on exciting projects.
Otoh, if someone has a very small chance of donating as much as Dustin Moskovitz did, then it’s very plausible they should do that—I certainly wouldn’t discourage people from earning to give if they are succeeding at it.
This is a cool list. I am unsure if this one is very useful:
* There aren’t many salient examples of people doing direct work that I want to switch to e2g.
This is because I think that we are not able to evaluate what replacement candidate would fill the role if the employed EA had done e2g. My understanding is that many extremely talented EAs are having trouble finding jobs within EA, and that many of them are capable of producing work of the same quality as current EA employees.
This reason I think bites both ways:
* E2g is often less well optimised for learning useful object-level knowledge and skills than direct work.
My understanding is that many non-EA jobs provide useful knowledge and skills that are underrepresented in current EA organizations, though my impression is that this is improving as EA organizations professionalize. For example, I wouldn’t be surprised if, on average, a highly talented undergrad became a more effective employee of an EA organization by spending 2 years ETG at an anonymous corporation before starting direct work. And if we’re lucky, such experiences outside EA would promote epistemic diversity and reduce the risk of groupthink in EA organizations.
> This is because I think that we are not able to evaluate what replacement candidate would fill the role if the employed EA had done e2g.
Idk I feel like you can get a decent sense of this from running hiring rounds with lots of work tests. I think many talented EAs are looking for EA jobs, but often it’s a question of “fit” over just raw competence.
> My understanding is that many non-EA jobs provide useful knowledge and skills that are underrepresented in current EA organizations, albeit my impression is that this is improving as EA organizations professionalize
This seems plausible, though I personally think it’s somewhat overstated on the forum. I agree that more EAs should be “skill maxing” over direct work or e2g, but I don’t think we should use e2g as a shorthand for optimising for developing valuable skills in the short term.
> I think many talented EAs are looking for EA jobs, but often it’s a question of “fit” over just raw competence.
For the significant majority of EAs, does there exist an “EA job” that is a sufficiently good fit as to be superior to the individual’s EtG alternative? To count, the job needs to be practically obtainable (e.g., the job is funded, the would-be worker can get it, the would-be worker does not have personal characteristics or situations that prevent them from accepting the job or doing it well).
I would find it at least mildly surprising for the closeness of fit between the personal characteristics of the EA population and the jobs available to be that tight.[1]
For most social movements, funding only allows a small percentage of the potentially-interested population to secure employment in the movement (such as clergy or other religious workers in a religious movement), so most members do not face this sort of question. But I’d be skeptical that (e.g.) 85% of pretty religious people are well-suited to work as clergy or in other religious occupations.
I don’t understand why this is relevant to the question of whether there are enough people doing e2g. Clearly there are many useful direct-impact or skill-building jobs that aren’t at EA orgs, e.g. working as a congressional staffer.
I wouldn’t find it surprising at all if most EAs are a good fit for good non-e2g roles. In fact, earning a lot of money is quite hard; I expect most people won’t be a very good fit for it.
I think we’re talking past each other when we say “EA job”. If you mean a job at an EA org, I’d agree there aren’t enough roles for everyone, but most useful direct work/skill-building roles aren’t at EA orgs, so that doesn’t seem very relevant. If you mean a directly impactful job or one useful for skill building, your claim seems wrong: there seem to be many jobs that will be better fits for people than e2g-motivated ones (imo).
I agree that we shouldn’t use e2g as a shorthand for skillmaxing.
I am less optimistic about the ‘fit’ vs raw competence point. It’s not clear to me that a good fit for the position can easily be gleaned from work tests—a very competent person may be able to acquire that ‘fit’ within a few weeks on the job, for example, once they have more context for the kind of work the organization wants. So even if the candidates looked very different at the point of hiring, the comparison may change once we imagine both in an applied job context, having learned things they did not know at the time of hiring.
I am more broadly worried about ‘fit’ in EA hiring contexts because, as opposed to markers of raw competence, ‘fit’ provides a lot of flexibility for selecting on traits that are relatively tangential to work performance and often unreliably assessed. For example, value-fit might select for likeminded folks who have read the same stuff the hiring manager has, reducing epistemic diversity. A fit for similar research interests also reduces epistemic diversity and locks in certain research agendas for a long time. A vibe-fit may select simply for friends and those who have internalized norms. A worktest on an explicitly EA project may select for those already more familiar with EA, even if an outside candidate could pick up basic EA knowledge quickly once they got the job.
My impression is that, overall, EA does have a noticeably suboptimal tendency to hire likeminded folks and folks in overlapping social circles (i.e. friends and friends of friends). Insofar as ‘fit’ makes it easier to justify this tendency internally and externally, I worry that it will lead to suboptimal hiring. I acknowledge we may have very different kinds of ‘fit’ in mind here, but I do think the examples I provide above do exist in EA hiring decisions.
I haven’t done hiring rounds for EA, so I may be completely wrong—maybe your experience has been that after a few worktests it becomes abundantly clear who the right candidate is.
It feels like if there were more money held by EAs, some projects would be much easier:
* Lots of animal welfare lobbying
* Donating money to the developing world
* AI lobbying
* Paying people more for work trials
I don’t know if there are some people who are much more suited to earning than to doing direct work. It seems to me they’re quite similar skill sets. But if they’re really sort of at all different, then you should really want quite different people to work on quite different things.
> But if they’re really sort of at all different, then you should really want quite different people to work on quite different things.
I agree, but I don’t know why you think people should move from direct work (or skill building) to e2g. Is the argument that the best things require very specialised labour, so on priors, more people should e2g (or raise capital in other ways) than do direct work?
Hi Caleb,
Donating 10 % more of one’s gross earnings to an organisation 10 times as cost-effective as the one one could join is 10 (= 0.1*10/0.1) times as impactful as working there, if the alternative hire would be 10 % less impactful. Do you agree? If so, do you have any thoughts on what this implies, given the distribution of cost-effectiveness across the jobs of people replying to the EA Survey?
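The arithmetic in the question above can be sketched as a toy model (my framing and variable names, not the author’s; everything is normalised to the impact of one year of work at the org one could join):

```python
# Toy model of the replaceability arithmetic in the comment above.
# Assumption: one's gross earnings are roughly the cost of one worker,
# so a donated fraction of earnings buys that fraction of a worker-year
# at the recipient org.

def donation_impact(extra_donation_fraction: float,
                    cost_effectiveness_ratio: float) -> float:
    # Donating a fraction of gross earnings to an org that is some
    # multiple as cost-effective as the org one could have joined.
    return extra_donation_fraction * cost_effectiveness_ratio

def direct_work_impact(counterfactual_gap: float) -> float:
    # Taking the job only adds the margin by which one outperforms
    # the alternative hire.
    return counterfactual_gap

donate = donation_impact(extra_donation_fraction=0.10,
                         cost_effectiveness_ratio=10.0)
work = direct_work_impact(counterfactual_gap=0.10)
print(donate / work)  # 10.0, matching the 0.1*10/0.1 in the comment
```

Note how sensitive the ratio is to the counterfactual gap: if the alternative hire would be 50 % rather than 10 % less impactful, the same donation is only 2x as impactful as taking the job.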
I think I follow and agree with the “spirit” of the reasoning, but I don’t think it’s very cruxy. I don’t have cached takes on what it implies for the people replying to the EA Survey.
Some general confusions I have that make this exercise hard:
* Not sure how predictive choice of org to work at is of choice of org to donate to. Lots of people I know donate to the org they work at because they think it’s the best; some donate to orgs they think are less impactful (at least on utilitarian grounds) than the place they work (e.g. see CEA giving season charity recs). You seem to think that the orgs people donate to are better than the orgs they work at, but idk if that’s true.
* A bit confused about the net effects of joining an org on its capital, e.g. lots of hires unlock more funding via fundraising capacity, credibility, etc.
* Most people earning to give (at least people that I meet) aren’t (imo) salary-maxing (i.e. earning way more than they would in direct work roles). If we were to restrict e2g to the top earners (e.g. startup founders, AI company employees, lawyers, hedgies, etc.), then I think it’s much easier to consider the hypothetical. Also, if you buy value-drift claims, maybe donations from direct workers go up from being surrounded by EAs?
* Replacement arguments are confusing; it actually matters what the person you would have otherwise hired goes on to do (and so on).
* It’s not super clear to me that rough ex-ante impact distributions are extremely skewed like ex-post ones are.
* I don’t know how to value the effects of collecting information being much easier in direct work than in e2g (hopefully, EA Funds and similar make this a little less important).
I don’t really like my comment here; I feel like I’m pulling away from the actual question. But I don’t think a myopic response is very helpful for discourse—the above considerations are actual cruxes for me, in the real sense that I could imagine my overall take changing if I changed my mind on them.
* not sure how predictive choice of org to work at is of choice of org to donate to, lots of people I know donate to the org they work at because they think it’s the best, some donate to think they think are less impactful (at least on utilitarian grounds) than the place they work (e.g. see CEA giving season charity recs) - you seem to think that orgs people donate to are better than orgs they work at but Idk if that’s true
Thanks for the good points, Caleb.
I am assuming people would donate to organisations which are more cost-effective in expectation than their own, because donating to less cost-effective ones would decrease their impact. This still leaves open the possibility of people donating to their own organisation (or asking to earn less), but they selected their own organisation partly for personal fit reasons, which do not apply to donations, so I would expect most unbiased people to think there are other organisations which are more cost-effective than their own.
* a bit confused about the net effects of joining an org on its capital, e.g. lots of hires unlock more funding via fundraising capacity, credibility, etc.
Roles unlocking funds should ideally be paid more until the point where increasing earnings by 1 $ only increases funds by 1 $.
> Do you think in real life that’s a sensible expectation, or are you saying that’s how you wish it worked?
Both. I do not have reasons to believe organisations are under- or overspending on fundraising. Some organisations say they have a hard time finding people who are a good fit for fundraising (being “talent-constrained”), but I think this only means there are steep diminishing returns on spending more on fundraising by increasing the earnings of possible fundraising roles; it does not mean they are underspending on fundraising. In general, I think it is sensible to at least have a prior expectation that the various activities on which an impact-focussed organisation can spend more money have similar marginal cost-effectiveness. Otherwise, they would be leaving impact on the table by not moving money from the least to the most cost-effective activities at the margin. At the same time, I expect to find inefficiencies after learning more.
I would argue that this work was highly net-negative, possibly so bad as to offset all the positive benefits of EA.