Peter Hurford thinks that a large proportion of people should earn to give long term
Recently, 80,000 Hours wrote that they think only a small proportion of people should earn to give long term. They asked their team “At this point in time, and on the margin, what portion of altruistically motivated graduates from a good university, who are open to pursuing any career path, should aim to earn to give in the long term?” and got a median answer of 15%.
I’m not confident in my response, but when thinking about the movement as a whole, I’d suggest a ratio closer to 50%, if not as high as 80%.
The ratio of donation opportunity generated per staff member at EA orgs suggests a need for a higher earning-to-give ratio.
This consideration is definitely more speculative, but depending on how you look at the numbers, it's possible to justify an earning-to-give ratio above 40% and possibly as high as 99%.
The Size of the Opportunity
The biggest example of generating a lot of donation opportunity per staff member is the Against Malaria Foundation. They only have two full-time staff, yet according to GiveWell their 2015 room for more funding was around $5M. This suggests that a very high-talent EA direct worker could generate as much as $2.5M for earning-to-give people to attempt to fill with donations.
GiveDirectly has 25 staff and room for more funding of somewhere between $1M and $25M (with the potential for much more), which is somewhere between $40K and >$1M per staff member.
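As a rough sketch of the per-staff arithmetic (these are just the estimates quoted above, not authoritative figures):

```python
# Donation opportunity generated per staff member, using the figures quoted above.
amf_room_for_funding = 5_000_000         # GiveWell's 2015 estimate for AMF
amf_staff = 2
print(amf_room_for_funding / amf_staff)  # -> 2,500,000: ~$2.5M per staff member

gd_room_low, gd_room_high = 1_000_000, 25_000_000  # GiveDirectly's estimated range
gd_staff = 25
print(gd_room_low / gd_staff)            # -> 40,000: ~$40K per staff member
print(gd_room_high / gd_staff)           # -> 1,000,000: ~$1M per staff member
```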
How Much Do People Earn to Give?
80K found in their 2014 survey that people doing earning-to-give were donating on average $13K/year for 2013, with an expectation to rise to $56K/year within three years if people’s plans are taken at face value. A follow-up in 2015 found a new average of $16.5K/year for 2014, though for a more limited sample.
According to the EA Survey (focused on 2013 data), the mean donation in our sample from EAs who met the criteria for earning-to-give (>=$60K annual income and >=10% donations) was $9.5K/year.
Now Let’s Do Math
Using the numbers most favorable to earning to give, imagine that an additional direct worker was like the AMF staff and generated $2.5M in donation opportunity. Now imagine I took the low value for earning to give, $9.5K/year. Using these two numbers, it would take 263 people earning to give to match one person doing direct work, suggesting that 99.6% of people should earn to give.
If instead I look at GiveDirectly's low number of $40K per staff member compared to a high value of earning to give ($56K/year), I calculate you need 1 person earning to give for every 1.4 people doing direct work, suggesting a ratio of roughly 42%.
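A minimal sketch of the two calculations above, taking the quoted figures at face value:

```python
def etg_fraction(opportunity_per_direct_worker, donation_per_etg_person):
    """Fraction of people who should earn to give, assuming each direct worker
    creates a fixed amount of donation opportunity and each earning-to-give
    person fills a fixed amount of it per year."""
    etg_needed_per_direct_worker = opportunity_per_direct_worker / donation_per_etg_person
    return etg_needed_per_direct_worker / (etg_needed_per_direct_worker + 1)

# AMF-like direct worker ($2.5M of opportunity) vs. the low ETG donation figure ($9.5K/yr)
print(etg_fraction(2_500_000, 9_500))   # ~0.996 -> ~99.6% earning to give

# GiveDirectly's low figure ($40K per staff member) vs. the high ETG donation figure ($56K/yr)
print(etg_fraction(40_000, 56_000))     # ~0.42 -> roughly 42% earning to give
```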
The Problem With Funding Salaries
Of course, it does seem clear that marginal direct workers won't be as productive at creating donation opportunity as AMF staff have been, so there will be less to fund per marginal direct worker.
Furthermore, AMF may not be typical of the charities marginal direct workers are moving into. Charities like CEA or MIRI scale not by creating opportunities to fund programs, but rather by hiring staff, and this ends up with a lower amount of donation opportunity per staff member (roughly equal to their salary).
Still, even if we expect earning-to-give people to be primarily focused on funding salaries, we may need more than 15%. Consider the classic story of the earning-to-give person who takes a job in finance to fund two direct workers, doubling their impact. First, if salaries + overhead per staff are ~$50K/yr (though they can be much lower), a 15% earning-to-give ratio would require the average earning-to-give donation to be roughly $280K/yr, which seems unrealistically high, even for the next three years. A more realistic ratio where the average (not median) earning-to-give person donates $50K/year (still more than is currently happening) would be 1:1, or 50%.
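The arithmetic behind that first point, as a quick sketch (this assumes the earning-to-give people's own salaries don't need to be covered by the movement):

```python
def required_avg_donation(etg_share, cost_per_direct_worker=50_000):
    """Average annual donation each earning-to-give person must make so that
    the other (1 - etg_share) of the movement can be employed in direct work."""
    direct_workers_per_etg_person = (1 - etg_share) / etg_share
    return direct_workers_per_etg_person * cost_per_direct_worker

print(required_avg_donation(0.15))  # ~283,000: roughly $280K/yr at a 15% ETG share
print(required_avg_donation(0.50))  # 50,000: $50K/yr at a 1:1 (50%) ratio
```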
Second, the AMF model should not be ignored when thinking about donation opportunity created per marginal direct worker, and it's quite possible that in some scenarios we may need many earning-to-give people to support the opportunities created. It's an open question what proportion of EA resources should go to GiveWell's scalable health charities, which often generate millions of dollars in donation opportunity per employee, rather than to other EA orgs where the donation opportunity per employee is much lower. Those who think the majority of EA funding should focus on GiveWell's top charities, or that GiveWell top charities should be the default giving opportunity for the typical donor absent good reason to donate elsewhere, should also be far more inclined to think that more people should earn to give.
EA Orgs Have More Room For More Funding Than People Give Them Credit For
Recently I’ve been involved in some EA projects that I’ve wanted to fundraise for. On three occasions with three separate EAs the reaction was always “Huh, I’m really surprised that hasn’t been funded already.” …And many of those projects still need money.
There is supposed to be an explosion in earning to give. But I don’t think it’s arrived quite yet. From my point of view many plausibly important projects still want more funding.
Just to mention a few projects I'm aware of:
- According to Jacy's speech at EA Global, Animal Charity Evaluators is quite funding constrained and would hire more researchers if they had the money.
- The Machine Intelligence Research Institute believes they could have as much as $5.5M in room for more funding according to their fundraiser (though I don't know if they think of themselves as funding constrained or not).
- Charity Science feels talent constrained but also could really use more money for planned expansions we can't afford yet.
- Some people close to GBS Switzerland told me they feel funding constrained.
- Several projects currently going through EA Ventures have not yet raised the funding that they wanted.
- And, of course, GiveWell top charities have $500K to $23.4M in room for more funding according to the latest analysis.
- And other projects I'm aware of are on the horizon and may make more public pitches soon.
...Of course, it’s possible you may not believe in supporting these projects. If so, depending on what you want to support, it’s possible the remaining room for more funding may be quite small. But the belief I’ve been hearing from some that all EA projects are getting fully funded all the time and that EA Ventures will simply take care of the rest is not currently true.
Earning to Give May Have Better Career Capital
Career capital, as 80,000 Hours defines it, is the skills, credentials, connections, and other benefits you get from a career that help you have more impact in the future. Since what matters is your total impact and not your current impact, it's quite plausible that young people (including myself) should be focusing much more on career capital than on having an impact now.
Earning-to-give careers usually offer good career capital opportunities. The most important skills (80K suggests computer programming, web development, statistics, machine learning, independent work, self-motivation, sales, communication, and management) are typically found in abundance in good for-profit careers that also happen to have good earning-to-give potential.
Yes, you can certainly find this capital in non-ETG careers, and I want to be careful not to fall into a charity vs. for-profit ETG dichotomy when many additional options exist (e.g., research, academia, and politics). But I generally believe that it is easiest to find career capital within earning-to-give careers, and that it is much easier to transition out of earning to give into other careers than vice versa (with the possible exception of academia).
Earning to Give Fits More People
Furthermore, psychologically, earning-to-give seems to me to be a better fit for the average EA than direct work. Many EAs are already working at a company and can simply move to donate more of their salary or focus on increasing their salary, rather than quit their job and start a new one. Also, EA direct work is frequently concentrated in certain cities that require relocation, which can be a significant barrier.
Another advantage of earning to give is that it's often easier to accomplish for people who have less altruistic motivation or direct-work talent. Of course, this certainly misses 80K's point, because they were talking about people on the margin who were high-talent, high-motivation, and open to many career paths. That is a different question. But in the actual movement, it's important to note that many people are not in the top 0.1% of drive and talent. Yet they still have something valuable to contribute: a share of their income through earning to give. It's often much easier to get, keep, and do an earning-to-give job than it is to do EA direct work.
Lastly, earning to give frequently provides much more career capital than other jobs early in one's career, which would allow people to launch more successful non-ETG careers in the future. It's generally a lot easier to move from earning to give to not earning to give than vice versa, given constraints on personal savings and on various industries.
Conclusion
Obviously the choice between earning to give or something else is a personal one that is sensitive to your personal skills and fit. But I think if you have the ability to enter a particularly high-earning career, there are good reasons to do so, and more people should be considering it than the current wisdom seems to suggest.
There are certainly good reasons to think 80K's argument is correct: there is already a lot of money in EA through Good Ventures and other high-net-worth individuals, and there is a potential that existing earning-to-give EAs may start donating much more soon. I'm excited about how much the EA movement might grow through more direct work. However, I think we are still a ways off from earning-to-give people reliably funding the remaining 85% of the movement.
But until things change, 15% seems quite small to me, and I'd suggest an earning-to-give ratio of 50% or even as high as 80% for the movement as a whole.
Thanks for this; I agree with much that is said.
My numbers are a little different to yours at around 25-50%. I think the reason is that I envision many EAs going into 'influence' areas like foundations, other NGOs, politics, academia, journalism, and policy. Those that do probably won't earn large sums, but neither will they be drawing substantial funding from the EA community; they are sort of excluded from both sides of the earning-to-givers vs. direct-workers analysis you do above. When I do a similar analysis, I get a similar conclusion of wanting very roughly 1 ETG person for each 1 direct EA org person, but then I envisage 0-50% of EAs doing the orthogonal 'influence' options, hence 25-50% in both ETG and 'direct' work.
Part of the disagreement here is also surely how narrowly you define ‘Earning to Give’; I think 80k probably meant something much stronger than “>=$60K annual income and >=10% donations”; more like >$100k gross annual income and >=50% donations. I think both definitions have merit, it’s just worth clarifying from the outset what you are talking about.
Edit: I would also add that my experience of funding things this year is that we are indeed a few years away from the projected (and I think reasonable to expect) Earning-to-Give explosion. I can’t think of a major cause area that doesn’t currently have both a meta-charity and direct charity constrained by funds. Obviously this is subject to potentially rapid change, but this was a significant update for me so I wanted to share.
That’s really interesting analysis! I hadn’t considered that. But I agree.
This was a significant factor for us. I could easily see a future where it's best for the majority of EAs to go work in research, international orgs, policy, etc., which already drives the percentage under 50%.
I agree with your numbers more than the OP, as well, AGB.
What do you think is the best strategy to account for this possibility? How much should we prepare? I figure plenty of people going into earning to give is fine anyway, since I expect these careers are indeed more likely to build career capital allowing a solid transition out of earning to give and into another career approach in the future if need be. Also, 80,000 Hours now seems to recommend that someone enter a career with additional potential for direct impact and/or outsized opportunity for career capital whenever they recommend earning to give anyway.
EA Ventures is a good start as far as preparation goes. I’m not going to make the argument ‘Likely expansion of EtG means many fewer people should do it’ because I think it’s quite weak for the reasons you gave. But I do think that people with the ability to create funding opportunities (which I think is actually a relatively small number of people) should be trying to do so. We could do with more founders and a more diverse/complete set of organisations.
Interesting post, thanks Peter!
I need to think about this issue more, but I think there might be a couple of problems with the estimates.
1) Let's divide problems in the world into 'funding constrained' and 'talent constrained'. What you've done is pick the most funding constrained causes we know (GiveDirectly and AMF) and then say "wow these can absorb a lot of funds", which is not surprising, because they were selected for that property.
But there are other causes where it looks like a talented person could make a big difference but where it’s not easy for money to buy progress. These are causes that are more constrained by innovation, leadership, coordination, and so on. Some areas that might fall in this category include EA movement building, much of research, green energy, much of policy, international relations. We asked Holden to speculate on what they might be here: https://80000hours.org/2014/10/interview-holden-karnofsky-on-cause-selection/
We asked biomedical researchers to estimate how much money they would trade for a researcher with good personal fit, and they often named figures of around $1m per year, more than most people could donate.
Taking talent gaps into account too, it becomes far less clear where the ideal balance lies.
It seems likely the world is more talent constrained than funding constrained, if that question makes sense.
2) The figures for how much the typical etg person will donate might be a big underestimate. You can't easily infer the long-term average from the 80k surveys because those are surveys of people very early in their careers—indeed some of them are still at college. Many EAs have long-term earning potential over $1m, so will be donating $100-$500k per year, so your estimate could be out by a factor of 10.
3) You’re comparing the most talented direct workers (Rob Mather) with the typical etger. It would be more fair to compare equally talented people. The people with best fit for earning to give will be able to donate many millions within a couple of years, which is similar to the amount of room for funding created by a staff member at AMF. So that might suggest a 1:1 ratio of etg to direct work.
And if you think of the typical salaries at an EA org (~$50k per year), one talented etger will be able to cover the salaries of ~20 people.
4) The EA movement is pretty small so it seems very achievable to pull in funds from elsewhere, and there’s been a strong track record of doing this e.g. most of SCI’s funding has come from Gates; Thiel funded a bunch of things; CEA has a bunch of external donors.
5) What about value of information? An EA movement where 95% of people etg as software engineers while 5% do direct work is going to have very stunted learning opportunities. I’d prefer to see EAs working in a wide variety of causes and sectors, then sharing what they learn with each other. A similar consideration applies to the EA movement building a wide portfolio of skills so it can address big problems in the future.
6) I’m unsure about career capital. I’m tempted to agree that for the median person etg might normally offer better career capital, but if you’re especially talented it may be better just to focus on doing something impressive in an important cause. https://80000hours.org/2015/07/what-people-miss-about-career-capital-exceptional-achievements/ I also think people underestimate the career capital you get from working at EA orgs. e.g. I think I gained far better career capital from working at 80k than I could have done in finance, and I had good options there.
7) I’m unsure etg fits more people. Bear in mind that the common sense position is that earning to give is bizarre and no-one does it. Whereas loads of people want to work in teaching, nonprofits, research and so on.
Also, if you find it hard to stay altruistically motivated, then it’s probably better to be among lots of other altruists rather than being the only person in your company etg.
I want to push back a bit against point #1 ("Let's divide problems into 'funding constrained' and 'talent constrained'."). In my experience recruiting for MIRI, these constraints are tightly intertwined. To hire talent, you need money (and to get money, you often need results, which requires talent).
I think the “are they funding constrained or talent constrained?” model is incorrect, and potentially harmful. In the case of MIRI, imagine we’re trying to hire a world-class researcher for $50k/year, and can’t find one. Are we talent constrained, or funding constrained? (Our actual researcher salaries are higher than this, but they weren’t last year, and they still aren’t anywhere near competitive with industry rates.)
Furthermore, there are all sorts of things I could be doing to loosen the talent bottleneck, but only if I knew the money was going to be there. I could be setting up a researcher stewardship program, having seminars run at Berkeley and Stanford, and hiring dedicated recruiting-focused researchers who know the technical work very well and spend a lot of time practicing getting people excited—but I can only do this if I know we’re going to have the money to sustain that program alongside our core research team, and if I know we’re going to have the money to make hires. If we reliably bring in only enough funding to sustain modest growth, I’m going to have a very hard time breaking the talent constraint.
And that’s ignoring the opportunity costs of being under-funded, which I think are substantial. For example, at MIRI there are numerous additional programs we could be setting up, such as a visiting professor + postdoc program, or a separate team that is dedicated to working closely with all the major industry leaders, or a dedicated team that’s taking a different research approach, or any number of other projects that I’d be able to start if I knew the funding would appear. All those things would lead to new and different job openings, letting us draw from a wider pool of talented people (rather than the hyper-narrow pool we currently draw from), and so this too would loosen the talent constraint—but again, only if the funding was there.
Right now, we have more trouble finding top-notch math talent excited about our approach to technical AI alignment problems than we have raising money, but don’t let this fool you—the talent constraint would be much, much easier to address with more money, and there are many things we aren’t doing (for lack of funding) that I think would be high impact.
I agree many things are both talent constrained and funding constrained.
I think you can have the whole spectrum from mainly constrained by a certain type of talent, to constrained by both, to mainly constrained by funding.
Ben, between your comments, these ones I made, and AGB's comments above, I'm thinking of writing not a direct rebuttal to Peter Hurford's estimates of the ideal ETG:direct-work proportion, but a post called "When Should You Go Into Direct Work", which would be a list of heuristic considerations for when someone should consider going into direct work vs. earning to give. I think it's important to make a visible response to Peter, undoing a potential misconception that earning to give is a better fit than some kind of direct work, rather than leaving a few disparate comments. I especially think your points 3 and 5-7 are important considerations for individual EAs making career choices, significant plan changes, etc.
Would you like to read or comment on the draft of such a post when it’s available?
A couple points (with opposite signs):
Why didn’t you mention GiveDirectly [ETA: in the ‘orgs with room for more funding’ section], an organization with nigh-boundless room for more funding? It just took $25MM from Good Ventures, has a history of extremely rapid growth, and its model should eventually allow it to take many billions of dollars per year.
Also, contrast earning to give with other paths to influence large quantities of funds, e.g. working at a large foundation, at IARPA, or in a government aid bureaucracy. The average money moved in the relevant roles in those fields looks a lot larger than for earning to give.
One concern I have with working at a foundation is I don’t know how feasible it would be to move large amounts of money to more “out there” causes like x-risk which are plausibly the most important causes. This would surely be easier at some foundations than others but I don’t know if it would be feasible at any large foundation that isn’t already making decisions about cause selection.
Did you already see our career profile on it?
https://80000hours.org/career-guide/top-careers/profiles/foundation-program-manager/
Thanks Ben, I hadn’t seen that. I’ll give it a read!
First of all, before the bulk of my response: there are global catastrophic or existential risks which will seem less "out there" than others. I think most laypeople will respond better intellectually and emotionally to mitigating the chances of a pandemic, global food insecurity, or the tail risks of climate change than to A.I. catastrophe or geomagnetic storms. As foundations like the Open Philanthropy Project both find more and better opportunities for grants to mitigate GCRs and normalize granting to these causes in coming years, it may become (much) easier for the marginal effective altruist at a foundation to make grants in this direction. Anyway...
I agree it won’t be as feasible to move money to “out there” causes at some foundations, but I think an effective altruist should take what they can get. I mean, we shouldn’t literally be as blunt in our decision-making as that, but I’ll give you an example.
Let's imagine a student at Oxford University named Mary has made a significant plan change because of 80,000 Hours, and she intends to go work at a foundation in an effort to influence where its funds go. Because of her strengths and connections, Mary has great fit and opportunity to make it in foundation work. She also studies Philosophy, Politics, and Economics, a major which positions her to do foundation work in the U.K. better than, say, a major in Psychology or Chemistry. However, because of her interaction with effective altruism, Mary has decided her personal priority is mitigating A.I. risk, even though almost any foundation which would hire her would at best let her make grants to AMF or GiveDirectly. Should Mary still aim to work at a foundation?
I think there's still a case to be made. What seems a dilemma may be two opportunities in disguise. By working at a foundation making grants to AMF, Mary is having the greatest impact she could have through direct work, not just for global poverty but for any cause. If we assume she considered either earning to give or direct research on the value alignment problem, and still concluded the best fit for her was grantmaking at foundations, I don't think this new development changes her comparative advantage. So, she can take a job at that foundation. Meanwhile, if she earns enough, she can still donate to MIRI on the side, as a form of indirect impact. This combination of choices ensures she's still having the greatest impact she could expect to have at the beginning of her career. As she climbs the ladder, builds career capital, and gains a reputation, Mary puts herself in a position to make grants directly to "out there" causes at another foundation, if that ever becomes a future possibility.
Now, I don't think this example can be used to justify what advice is given to effective altruists in general. However, when talking about career selection, the inside view can matter as much for an individual effective altruist making choices for themselves as the outside view matters for advising the marginal EA in the abstract. As someone currently agnostic between causes, due as much to my disposition toward indecisiveness as to my real uncertainty (an indecisiveness which challenges my career selection as well), I'd be ecstatic at an opportunity like the one I've devised for Mary. To have confidence that I don't need to make an ultimate cause selection between two overwhelming options before I start having a leveraged impact would make an EA career psychologically easier for me. I doubt I'm the only effective altruist who feels this way.
I did mention GiveDirectly in the post, but I wrote the draft before the $25M announcement and underestimated the upside.
Yep, that’s a good point. I think my arguments apply more toward the balance of doing money moving (e.g., earning to give, foundations, IARPA, etc.) versus direct work (e.g., working at CEA, doing research, etc.), though this is not a perfect dichotomy.
A neglected but important related question is: "What proportion of people doing 'money moving' should earn to give?"
Hey, this is a great discussion to have, so I'm really glad you posted it. You haven't changed my views, I don't have time right now to go into details, and I haven't read the comments yet, but I just wanted to raise a couple of points where you think we disagree but we actually don't. Note the question we answered:
“At this point in time, and on the margin, what portion of altruistically motivated graduates from a good university, who are open to pursuing any career path, should aim to earn to give in the long term?”
"Long term" in that question is bracketing out the 'career capital' argument for EtG which you discuss above. I believe that a higher proportion of people should EtG short term than should EtG long term because of the career capital benefits. (And I think I say something similar in the OP.)
“open to pursuing any career path” is bracketing out the following consideration: “psychologically, earning-to-give seems to me to be a better fit for the average EA than direct work”. If we were just asking “what % of the EA community should (in a sense of ‘should’ that takes into account people’s psychologies, etc) EtG?” and ran the survey among the 80k team again, I suspect the number would be higher than 15%. (And again, I thought in the OP I mentioned this as an argument for non-EtG; there are many people who are going to EtG whatever happens, so if you’re happy not-EtG that’s a reason in favour of not-EtG).
So I’m wondering what % you’d give in answer to the question we were asking, given clarifications 1 and 2? I’m worried there’s some miscommunication bc you seemed to be answering “What % of the EA community should EtG at any one time” and we were answering a narrower q? (I don’t think we’ll have the same view, but it might be closer)
On the subject of non-disagreements, can I make another ping about the probable large difference in definitions of Earning-to-Give? Peter did give a definition “According to the EA Survey (focused on 2013 data), the mean donation in our sample from EAs who met the criteria for earning-to-give (>=$60K annual income and >=10% donations)...”. There isn’t one in the OP.
Or to put it more pointedly, would I be right in guessing that if we define everyone earning >$60k and donating >10% as Earning to Give, you think more than 15% of people open to any career path should be doing that long term?
If I were defining it again, I'd further refine it to add a third criterion: "and with the intent that the majority of one's impact comes from donations". For example, if one earns $60K at a company working on improving developing-world infrastructure, it sounds like something different from what I'd consider making a big difference through donations.
I wrote my post knowing we'd be talking past each other some—I wanted to emphasize career capital and psychological fit even knowing that they were being bracketed out by your carefully worded question. Sorry that makes things confusing!
It’s difficult to make even a rough guess about the “long term” future of EA (say >5 years) and I don’t think that such a rough guess is all that valuable when switching out of ETG to something more “direct” is usually pretty easy.
The % is also further complicated by the consideration other people raised, which I did consider but not sufficiently: careers that involve direct impact without requiring funding from EAs (e.g., academics).
On one hand, if more foundations like Good Ventures continue to enter at the current rate, and our current ETG people don't value drift and their incomes rise as they think they will, ETG will become less valuable. On the other hand, if funding opportunities continue to grow rapidly, especially from the Open Philanthropy Project, ETG will become more valuable. I'm not clear on which one of these trends will dominate. I don't even know which trend is currently winning, though I suspect the first one (making ETG less valuable over time).
That’s why I wanted to focus more short-term.
In general, in “talent constraint vs funding constraint” discussions I find it super important to be clear on exactly what q is being asked as it’s easy to talk past one another.
I agree with your conclusions here. A few weeks ago at Stanford EA we discussed career alternatives to earning to give, and took a somewhat different approach from you here. We threw out a number of ideas about careers we personally could pursue and how valuable we thought they were. We more or less reached a consensus that we could all do more good by earning to give than by doing anything else. This may have been more true for the people present than for EAs in general, but even so I suspect it’s still the case that 50+% of EAs should be earning to give.
As you know, I endorse your position, and think that in the ideal distribution—the one in which all of those not earning to give are doing the most valuable things—even more than 80% of people would be ETG. (More precisely, they’d be doing good primarily by donating, as this is the real issue here, not whether they do ETG in the sense of taking high-paying jobs primarily in order to donate.)
Tom, there is potential for effective altruism to expand in multiple ways.
It could grow exponentially in the absolute number of people who join the movement and pursue the most effective careers they can.
It could grow exponentially in terms of money moved to effective charities, e.g., by Good Ventures, the amount of influence it wields, or the number of projects it's responsible for initiating.
It’s possible there will be a great increase in the number of effective giving opportunities to existing or yet unfounded organizations and projects. Or, only the amount of money moved to existing effective charities might substantially increase, creating or exacerbating funding constraints. Or, both. How would your ideal distribution of EtG relative to other EA work change under such scenarios?
Changes like that would absolutely change my ideal distribution, in the ways that you’d predict. :) I’m just sceptical that some of them will in fact happen—e.g. that we’ll develop many GiveWell-beating donation targets, able to absorb a lot of money before capping out. I’m one of the people who Peter mentioned as favouring direct poverty relief—and there are an awful lot of poor people out there.
Yeah, I think these changes are unlikely; I was just trying to test your thoughts on the subject. I believe their likelihood is high enough that it should be something in the back of our minds in case we need to quickly change our plans, but not so high that we need to take focus away from what we're currently doing to make new plans, until we receive real evidence such dramatic changes will indeed happen.
For the record, for all values of "GiveWell-beating donation target", whether a recommended traditional charity or a narrow funding gap that Open Phil considers an incredible opportunity, I expect most interventions a consensus of effective altruists would agree beat, e.g., AMF would only beat AMF for a few months, basically long enough for them to receive the funding needed to sustain an experiment to see whether such a new initiative would work and be scalable. Once they receive seed funding, they wouldn't be worth funding again at least until results confirm it's a valuable investment, so they'd hit sharply diminishing marginal returns.
This is assuming we’re judging the value of a cause or intervention with only conventional measures, like expected or demonstrable number of QALYs, and not other things like the “important/valuable, crowded/neglected, tractable” heuristic, e.g., Open Phil uses. I personally still don’t know what I would conclude the output of that analysis would be.
As always your posts are very clear, constructive and go straight for the key points!
Here’s why I don’t agree, and am not much moved from my original estimate:
Firstly, as you note, the claim was only about the most motivated/talented people, in the long run. The first point was there to deal with the fact that many people who are less motivated will find it much easier to earn to give than the alternatives. The second is there to deal with the career capital point—that many people should earn to give early on, but transition out later on to have a direct impact. So inasmuch as we are addressing different audiences, we don't actually disagree as much as it might seem. That so many people who are not as flexible will prefer earning to give is a reason to do direct work if you're open to both.
The post neglects that we can get enormous sums of money from outside pre-existing sources, for example, Good Ventures. This could end up covering many of the costs for people doing direct work, and dramatically reduce the need for earning to give. So probably our estimates should be a wide range depending on how that goes.
Earnings are log-normal, so the average donations per earning to giver are much higher than the typical cases you mention. Particularly so as many people are going into entrepreneurship, which allows you to make either a lot of money in your first 10 years, or switch to direct work. (Also note there is something peculiar about the argument that each person who earns to give doesn’t donate much money, so that’s why more people should do it.)
I don’t think AMF or GiveDirectly are likely to continue to be regarded as the most effective organisations in the long run, so although they have exceptional spends per staff member, I anticipate that the places I would want to move money to will have many more staff for each dollar they spend.
Lots of promising opportunities won’t require any earning to give to support them—politics, science researchers, academics, working in a foundation, journalists, activists, profitable start-ups that are directly valuable, etc. To me that’s already where I would want at least a third of the people we were talking about to go.
A big thing this seems to be missing is that there are other sources of money than “EAs earning to give”. Philanthropists and foundations could easily fill GiveWell’s charities’ room for more funding.
(I had taken a similar approach a while ago, and no longer think that’s the right comparison to make.)
You're right that stacking up earning-to-give count vs. direct opportunity count is an unstable argument. However, it's important to not just assume foundations are rushing in to fill these funding gaps. (Not saying you are making that assumption, of course.)
Right. It’s not that philanthropists and foundations are already spending their money optimally, but that because it’s already there it makes sense to have people working on getting it spent better.
I noticed that I’m confused about this argument because it implies that the worse earning to give is, the more people should do it.
Could you explain more why these are the correct things to compare? I get and agree with your second comparison where you compare salaries.
“I noticed that I’m confused about this argument because it implies that the worse earning to give is, the more people should do it.”
**
Some hopefully more intuitive analogies:
When automation dramatically ups the productivity of a single worker in a job, a common result is that fewer people are needed to do the job (people get laid off, sometimes they strike, etc.)
The huge increase in productivity of farmers means that whereas at one point probably 80%+ of the working population was needed in agriculture to produce enough food, now it’s <10%.
If I’m earning barely enough to live on and then living costs outpace wage rises (i.e. my real wage falls), I will probably work more hours.
If I’m cooking, I will likely add much less of an ingredient with a very strong flavour than one with a relatively weak flavour.
**
Peter is looking at a deliberately simplified analysis where all people are either money-producers or money-consumers. If there are too many producers or consumers, the marginal producer/consumer isn’t actually increasing the amount of money that gets moved (we are either opportunity-constrained or funding-constrained). We want a rough balance, and so the worse the producers are at producing, the more of them we need, and vice-versa.
Thanks, I agree that this is helpful and explains his 2nd example where he is comparing the salary of an AMF person to what an E2G person donates, but I still don’t understand the example I quoted where he is comparing the “donation opportunity” from AMF staff versus E2G.
(To use the terms of your last paragraph, it doesn’t seem like this is comparing producers and consumers but rather 2 different types of producers.)
I agree it’s not intuitive when you put it that way, but I think it makes sense:
Imagine a strange world where the highest impact thing to do is to turn this magical crank. Also forget about meta opportunities—there are only two possible roles: turning the crank yourself (direct work) or funding the salaries of people who turn the crank (earning to give).
Forgetting a moment about psychological constraints and thinking only about pure impact, you ought to turn the crank yourself, because the more people turning the crank the more impact there is, and earning money itself doesn’t turn any cranks.
However, the people turning the crank need to not starve to death, and they’re requesting a frugal $25K/year to fund their lifestyles.
Currently there’s $0 in this crank movement (two different puns intended), so we need an ETG person to fund some salaries.
Now imagine that the highest salary you can get is $50K/yr—you take $25K/year for yourself and can fund one person full-time turning the crank. So for every one person turning the crank, you need one person ETG, or else nothing would happen. The next person to join the crank movement will do ETG, the person after that to join will turn the crank directly, and so on.
Now imagine that someone gets a superjob and can earn $50M/yr—they also take $25K/year for themselves, but they have $49,975,000 left over for donations; enough to fund 1999 people to turn the crank. Now the next 1999 people to join should turn the crank directly, because we don’t need any more money. None of the next 1999 people should earn to give.
When earning to give was worse, we needed more people doing it. When it got better, we needed far fewer people doing it. Thus, the better earning to give is, the fewer people should do it, generally speaking.
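A tiny numerical sketch of this toy model, using the made-up numbers above:

```python
def etg_people_needed_per_crank_turner(etg_salary, living_cost=25_000, crank_salary=25_000):
    """In the toy model, each earning-to-give person keeps `living_cost` for
    themselves and funds crank-turners at `crank_salary` each. Returns how many
    ETG people are needed per crank-turner (< 1 means one ETG person funds many)."""
    turners_funded_per_etg_person = (etg_salary - living_cost) / crank_salary
    return 1 / turners_funded_per_etg_person

print(etg_people_needed_per_crank_turner(50_000))      # 1.0 -> one ETG person per crank-turner (50% ETG)
print(etg_people_needed_per_crank_turner(50_000_000))  # ~0.0005 -> one ETG person per 1999 crank-turners
```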
Does that make sense?
Thanks Peter, I agree that your insight about “when it’s worse more people should do it” is correct and your analogy is helpful.
But what really confuses me is the specific example I quoted where you are looking at what an AMF staff person does versus what an E2G person does. It seems like in both cases you are comparing “donation opportunity”, yet you are choosing the one which is worse.
An AMF staff person is like a crank turner in my example—it doesn’t matter how much donation opportunity you create if there’s no one there to fulfill it.
I think you are agreeing with me? We shouldn’t be comparing the output of the “crank”, but instead be looking at what it takes to turn the crank. Therefore we shouldn’t compare 2.5 M to 9.5 K, instead we should compare the salary of someone at AMF to 9.5 K.
Some things are like this, though it is uncommon. https://en.wikipedia.org/wiki/Giffen_good
It seems like this would depend a lot on how you define EA. If you mean “people who attend EA Global” or even “people who read EA forums”, that’s probably a larger percentage who should do direct work than “people whose choices we hope will be influenced by EA philosophy”.
I think you could be right, but could you elaborate as to why you believe this?
Upvoted. Not to displace Elizabeth, but I hope you don't mind me taking a crack at this. Note: I'm not trying to make one knockdown argument, but taking a cluster of shots in the dark that might add up to validating Elizabeth's premise.
At EA Global, relative to people who will join effective altruism in the future, the collection of attendees, other passionate/dedicated EAs, etc., were referred to as "early adopters". "EA Global attendees" or "EA Forum participants" are close to the current "core". This signifies they're dedicated, and may be more willing to pursue direct work than other effective altruists. Direct work in EA leads to a less conventional career than earning to give does, so EA organizations should capitalize on the willingness of EAs who would pursue direct work.
The haste consideration might dictate that, since it's better to do the best direct work sooner rather than later, it's better to onboard more EAs into direct work as soon as possible, to realize a more leveraged impact.
I read on one of 80,000 Hours’ recent blog posts that they find it difficult to hire new talent because most potential hires don’t have the mix of “skills, rational insight, and deep knowledge of effective altruism” they’re looking for. This might be the case for many EA organizations. This might be more so if we consider the domain-specific knowledge orgs working on specific causes might require of their employees. Dedicated “early adopters” of effective altruism are disproportionately likely to be great fits for these specifications of EA orgs.
Connections and professional networks constitute an important part of finding talent, learning lessons, and sharing resources in a sector. EA has very different needs than much of the non-profit world, so EA orgs would be better served by building a professional network composed of existing EAs, which is relatively easy, rather than spending more resources and time trying to find the same in the rest of the non-profit sector.
As effective altruism grows, more nuanced and contextualized insight into the movement's particularities and history will be necessary to maintain its success. For example, take EA Global 2015. Had the organizers known more about last year's EA Summit, they would've been better able to avoid repeat mistakes such as not optimizing for considerations of the meals served, scheduling the conference on the same weekend as the national A.R. conference, and not checking with various cause representatives about what they thought was an appropriate amount of attention their cause received on the schedule. As time goes on and effective altruism grows, it will be even more crucial for EA to have early adopters at its orgs to pre-empt future problems of this kind.
EA organizations have long-term relationships, such as the relationships between a charity evaluator and its recommended charities. These unique relationships are facilitated by having an especially knowledgeable EA who knows the history of these relationships, rather than hiring a (new) outsider every couple years or so.
As EA orgs specialize in what they do, it's easier for a more dedicated EA to transition from a very specialized role in direct work to earning to give as needed, rather than a fresher EA transitioning from earning to give into a very specialized role. For example, GiveWell finds that training employees for their work or for management positions must be careful and slow-going to ensure it goes well. This whole process is made easier if more dedicated EAs go into direct work sooner rather than later.
More EAs going into direct work in cause prioritization, movement development, or other metacharity may greatly expand the quantity and spread of effective organizations who could receive funding in the future. If many EAs will go on to earn to give anyway, it's important for those of us involved now to expand the number of organizations who are prepared to do effective work with these future funds, and ensure they'll indeed be effective.
“EA has very different needs than much of the non-profit world.” In what way?
I also have to say that there is something very insider-y about this analysis. Much of the advice seems like it boils down to “don’t waste your time with non-EA people.”
Effective altruist organizations do work which is uncommon among other non-profit organizations, such as cause prioritization, charity evaluation, and the explicit growth and coordination of a budding social movement. Much of this might require unique skills, or at least ones that are less common among people working at conventional NGOs. So, long-time volunteers for EA organizations who also have tacit knowledge of dynamics in effective altruism as a community may be quicker and simpler to train than someone who knows nothing of effective altruism. However, if an organization broadened the scope of its search for talent beyond conventional non-profits and the existing EA community, to anyone and everyone from the public and for-profit sectors as well, they'd likely find unique candidates who fit the bill better than anyone else, effective altruist or not. In the past, it seems finding new hires within what they consider an acceptable timeframe has been difficult enough for small effective altruist organizations that they have felt forced to hire from within the community. However, now that the scope of effective altruism is expanding, past experience alone shouldn't stop EA organizations from looking beyond their own existing circles of influence to find new hires.
So, I don't agree with Elizabeth's original comment. AGB has a well-upvoted comment above this thread, and I agree that the ratio of earning to give to other effective altruist work he puts forth would be ideal, based on the current state of things. I think he is more or less correct for however wide a net one casts to define the population of effective altruism, even if it's one so small it only includes people who post to forums like this one and attend conferences every year. I don't think the proportion of "early adopters" of effective altruism, or whatever they're called, who go into direct work should be much higher than it is for the total of whatever couple thousand effective altruists there are.
I was just generating a bunch of possible arguments on the fly for Elizabeth's hypothesis, so I might have motivated myself to produce ones which on their face seem appealing but contain little substance. Like, I was putting myself in the shoes of an EA organization desperate to hire the most fitting employees for their team as soon as possible. Most organizations' situations aren't that dire. On second thought, I think only three of my above points stand up to scrutiny. There was another thread where Tom Ash answered one of my questions, which has made me more skeptical of the capacity of effective altruism to generate new superior giving opportunities in the form of new projects or charities than I once was. So, there's likely less capacity for direct work.
If the rest of the effective altruism community does and continues to hold the opinion that it can produce many new projects which beat, e.g., GiveWell's top charity recommendations in terms of effectiveness, more of those projects should be allowed to fail, as we would rightly expect to happen, and we should not keep funding them, as that would create a bunch of bloat and cut into funding we could provide to more effective organizations.
I’m finding this to be a really big question: do you think you could define what you mean by Effective Altruist?
Great post! Here’s another possible counter-point: The traditional EA interventions have been easy to quantify: bed nets, cash transfers, deworming, online-ads, leaflets, etc. As we get better at evaluating interventions we tend more towards harder-to-quantify stuff such as influencing politics. What makes the former interventions easy to quantify? One attribute is the fact that they consist of small things bought in large quantities. These are easy to study with RCTs. Running RCTs on areas where salaries are the thing to be funded is impractical.
So if the trend away from easy-to-quantify areas continues, we can expect to put more of our money into salaries. This yields two reasons we may need more direct work and less EtG: 1) Hiring people is a lot less scalable, which means less money is needed per intervention; 2) We may have to create new positions and fill them with EAs (e.g., what x-risk orgs do) or we may have to fill areas with EAs (e.g., politics).
Thanks! To be fair, I do feel like my argument takes into account this counter-point pretty fully, especially the section “The Problem With Funding Salaries”. But you’re right that the more we fund salaries, the weaker this argument becomes.
True, I totally overlooked that - I shouldn’t write comments when I’m sleepy ;)
Based on how each subsequent election cycle seems to be more expensive than the last in, e.g., the United States and the United Kingdom, I'm terrified by how much it would cost those earning to give to fund a campaign by themselves. Like, thinking about how many lives that money could counterfactually save, and there isn't even a guarantee an EA-funded candidate would get elected. Depending on how serious EAs interested in politics are, we had better figure out how to raise funds from outside effective altruism and run successful campaigns before one of us starts running. With its connections to other researchers who could help on such a project, and its current research experience with normative rationality, evidence-based decision-making, and counterfactual reasoning, 80,000 Hours seems best poised among EA orgs to carry out this research.
Interesting, I hadn’t thought of the possibility to use EtG money to fund campaigns rather than just having EAs raise the money as other politicians do.
Also, as we get better at measuring things we might open up new giving opportunities, such as those currently being looked at by The Open Philanthropy Project.
I think this would vary greatly by cause area—I see global poverty as primarily funding constrained (largely due to the fact that much of it involves transferring wealth). Unsure about existential risk, but I think animal causes are more human capital constrained. It’s interesting what Jacy said about ACE—I’m curious if he would extend that to animal charities more broadly. It seems to me like the sorts of things that would make a difference for animals could use more organizers and charismatic personalities relative to money.
That would surprise me. While GiveWell has billions (e.g., Good Ventures) and AI Risk reduction has millions (e.g., Elon Musk), EA animal causes have maybe hundreds of thousands at most. (Note that this ignores PETA, which does have tens of millions, but I’m not sure it’s really going to animals as EAAs would define the cause area.)
Maybe animal causes are just talent constrained and funding constrained, but I’ve heard more “I wish we had more money to make our salaries more competitive” and “I wish we could hire for a position X but we don’t have the money” than “We have $50K lying around for this job offer but can’t find anyone to take it”.
This also makes sense given that I think animal causes have a good capacity to hire from outside EA—there are lots of motivated animal activists who haven’t heard of EA yet (though they may be hostile to the idea). If I recall correctly, Jon Bockman was this kind of hire.
My thinking was that (and show me the holes in this—it may affect major life decisions!) animal causes are more human capital constrained because more people willing to borderline starve would be useful. You definitely hear more people say “I wish we had more money...” than “We have $50K lying around...” but there are two ways to solve that—more money or someone willing to live on less than $50K, and I think the latter is likely to be more important. Given the record of the movements that seem to me to most resemble animal rights, it seems the vast majority of the work will be done by volunteers, so the primary need is more volunteers rather than more money.
Someone who would take a $25K salary instead of a $50K salary is effectively “donating” $25K. So if you think you could ETG more than that, you’d be beating that, from that perspective.
The stronger perspective is the perspective that we need more people in the animal rights movement to steward the money we already have, to create new funding opportunities, or to do good work such as to inspire more respect and thus more funding.
I'm not involved with the animal rights movement outside of its intersection with effective altruism, so I don't know much about it. However, among other things, I'd think the evaluators at ACE are involved with the AR movement, and would have come out and said at their EAG talks that the community is just as, if not more, constrained by lack of volunteers as by lack of funds. They didn't prioritize raising awareness of a greater volunteer need over a greater funding need. Of course, they were optimizing for an effective altruism audience. So, maybe the most the average effective altruist can do, one who has or will have a career which is not primarily low-paid or volunteer work for animal liberation, and who is already planning on earning to give or whatever, is donate to, e.g., ACE's top recommended charities. That's not necessarily an argument for the rest of the AR movement as it exists, or anyone new who joins it, to mostly go earning to give rather than volunteering.
+1 to this sentiment. I too would like to know if I’m ignorant or wrong about the future or present status of the animal rights movement.
Agree with this. AI risk seems the least funding constrained. My guess is global poverty is more talent constrained than funding constrained, but still somewhat funding constrained. Animal causes seem the most funding constrained. EA orgs might fall between global poverty and AI risk.
I’m not sure I agree with these comparisons.
I think MIRI has a good case that they can hire top math talent without them being EAs, provided they get enough money in their fundraiser, which they suggest has as much as $5.4M in additional room for funding.
Meanwhile, global poverty also appears to have about as much room for more funding.
Animal causes have relatively much less room for more funding just because there’s much less infrastructure set up right now to spend those funds. I doubt animal causes could absorb any more than $2M productively right now. But I hope this could change over the next five years...
Of course, each of the cause areas also have a lot of room for exceptionally talented people to make them better. I imagine someone who can start a new global poverty charity as good as AMF should certainly do that, even if they could get an ETG job at $1M a year otherwise.
The extent to which a cause is funding constrained doesn’t equal the size of its room for more funding. It’s more to do with how much progress you can gain per unit of money compared to a unit of talent.
Global poverty has large room for more funding, but I still suspect it may be more talent constrained than funding constrained, because a talented person can do a lot more through setting up new nonprofits, policy or research than etg.
I agree MIRI has a funding gap, but all the other x-risk research groups have a lot of funds and are concerned they may not find sufficiently good researchers to hire. Moreover, there are major donors (e.g. Open Phil) ready to put more funds into AI risk research, but who don’t think there are enough good people available to hire yet.
Building on what Peter said, Nick Cooney, in addition to Jacy, said that not just ACE but also charities like ACE’s top recommendations are funding constrained. If I recall correctly, Mr. Cooney said something like this at EA Global:
Note this isn’t a paraphrase but my attempt to quote Mr. Cooney directly, as best as I can remember. This is how he started his third of the “Animal Advocacy Triple Talk”. As senior staff at both Mercy For Animals and The Farm Sanctuary, he would know, and it appears he meant to prioritize and emphasize this practical point.
GiveWell has said in the past that finding the right talent is a bottleneck they can’t solve just by receiving more money. Animal advocacy and liberation seems to have a different problem: it needs tons of both. More money might help animal charities better search for and/or attract talent, but I don’t know enough about that. I’m seeking an interview with Nick Cooney for this Forum, but I haven’t heard back from him yet. If or when I do, I will ask him about this.
Yeah, this depends greatly on views of the optimal strategy for approaching animal activism. Nick Cooney definitely favors a more money-intensive approach where you spend money to conduct ad campaigns pressuring corporations and publicizing various videos. Other activists favor a more grassroots approach where funding is far less essential (though still valuable, to be clear, and often to a greater degree than the grassroots will admit). So I think what he said indicates more about the particular needs of those organizations than the movement as a whole, but I could be wrong.
Yeah, I forgot your priority cause is animals, so you’d know better. I’m just going off of what Mr. Cooney said, so take my report with a grain of salt (which you are doing).
This may not be the best place to ask, but I’m wondering why “the criteria for earning-to-give” includes “>=$60K annual income”? To me, that seems to be a high minimum that would exclude many who are (at least in their own minds) E2G.
The income cutoff is ultimately arbitrary and shouldn’t be thought of as a hard line, where someone earning $59.99K is definitely not ETG and someone earning $60.00K definitely is. But I do think there has to be a cutoff somewhere, as it’s supposed to be about taking a “high earning” job.
I don’t mean to suggest that people who are in, e.g., $45K/yr jobs dutifully donating 10% aren’t important, of course. The $4.5K/yr still makes a big difference—probably saving at least one life a year!
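As a rough check on that claim, here is a minimal sketch; the cost-per-life figure is an assumption in the ballpark GiveWell has cited for AMF, so treat the output as illustrative only:

```python
# Back-of-the-envelope check of the claim above. The cost per life saved is an
# assumption (roughly the range GiveWell has cited for AMF), not a figure
# taken from this discussion.

salary = 45_000
donation_rate = 0.10
assumed_cost_per_life = 3_500  # illustrative assumption

annual_donation = salary * donation_rate              # $4,500
lives_saved_per_year = annual_donation / assumed_cost_per_life

print(f"Annual donation: ${annual_donation:,.0f}")
print(f"Approximate lives saved per year: {lives_saved_per_year:.1f}")  # ~1.3
```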
I normally define it as someone who deliberately sought higher-earning work in order to donate more, rather than high earning in absolute terms.
I want to expand on what’s in the second sentence here. There are a substantial number of EAs already working at a job they like that makes them enough money to reasonably donate a lot of it. But most of them are probably over 25, which describes only half of EAs: according to this survey, the median age is 25.
So since much of 80K’s audience is probably still choosing their career at an earlier stage (still in university, fresh out and unemployed, or yet to choose a major), it makes sense to me that 80K wouldn’t emphasize earning to give for these people.
I’m also not sure the 15% funding the 85% quite holds. CFAR, for example, gets lots of donations but also gets money from people attending workshops. I don’t know the details, but I’d expect object-level charities like AMF to have fairly wide appeal and therefore to get a decent amount of money from people who don’t identify as EAs. I’m not actually confident on that point and would welcome evidence in any direction about it.
They can, but the idea with organizations like AMF and GiveDirectly is that they can absorb relatively massive amounts of donations and still be the best bang for anyone’s buck. I.e., even if GiveWell’s top recommended charities receive lots of money from both within and outside of effective altruism, they’ll still turn out to be the most effective. Of course, this will depend on which cause you prioritize. As Tom Ash commented:
Here’s a completely different route for arguing that giving money may be one of the most effective possibilities for improving the lives of others.
Income inequality is at historic high levels, both globally and in the US (see e.g. http://www.networkideas.org/networkideas/pdfs/global_inequality_ortiz_cummins.pdf)
Income inequality is robustly correlated with unhappiness (see e.g. http://www.lisdatacenter.org/wps/liswps/614.pdf)
Therefore, there may be a large opportunity in income redistribution.
I realize this is not a quantitative analysis, partially because “happiness” is so difficult to quantify in a meaningful way. In particular, I don’t know how to relate the various happiness measures in use to something like QALYs (which suggests to me that the QALY is not an ideal utilitarian metric). Also, the correlational analyses could be muddled by confounders, meaning we could decrease inequality and still have a sad population for other reasons. However, I note that distributional issues have been at the center of politics for as long as there have been politics, so it’s something that humans seem to care about a lot.
Previous generations’ answers to the distributional problem have included, e.g., democracy, pensions, Marxism, and universal health care. Advocating earning to give could be seen as an individual-level redistribution strategy. But one could also advocate for political reforms that might address these inequalities; those could have very large upside as well.
Huge income inequality might also just mean our most powerful way to help others is via money rather than via labour.
This seems as much an argument for growing earning to give in absolute terms, beyond effective altruism as it currently exists, as it is an argument for an increased proportion of existing effective altruists pursuing earning to give.
Yes. But then, shouldn’t all arguments about what is appropriate for EAs to do generalize to what is appropriate for everyone to do? Isn’t that the fundamental claim of the EA philosophy?
I don’t think so. I meant that your above argument is one for effective altruism to grow, and for that growth to be primarily driven by people who go into earning to give. That doesn’t mean everyone should earn to give. If effective altruism grew indefinitely, there would be a point at which there are diminishing marginal returns for more earning to give relative to the other options people could pursue. Your argument makes the case that this is true for the relative proportion of earning to give within effective altruism, but it also seems to me to imply that the amount of earning to give in the world should grow in absolute terms as well. That doesn’t imply, however, that 50% of everyone who could earn to give should, nor that everyone should do what effective altruism prescribes now.

If effective altruism did become a community of, say, tens of millions of people, what effective altruism would have the marginal person do would likely look much different from what it recommends people do now. I believe the fundamental claim of the EA philosophy isn’t that the arguments from effective altruism should generalize to everyone, but that they should generalize to the marginal, i.e., next, person who adopts effective altruism. What that generalization is changes as the number of effective altruists grows. However, effective altruism is still very far from being large enough that all its recommendations to the average or marginal community member would change.
If I understand you correctly, I think you make two interesting points here:
1) the potential of EA as a political vehicle for financial charity
2) the current EA advice has to be the marginal advice
When I wrote “isn’t that the fundamental claim of EA,” I suppose more properly I am referring to the claims that 1) EA is a suitable moral philosophy, and 2) the consensus answers in the real existing EA community correspond to this philosophy. In other words, that EA is, broadly speaking, “right” to do.
I’m going to address both of your above questions with one answer. Effective altruism is sort of a moral philosophy, but it’s not as complete or formalized a system as most religious deontologies, utilitarianism, or other forms of consequentialism or deontology. Virtue ethics is like effective altruism in that it runs on heuristics rather than the principles of deontology or the calculations of utilitarianism. I think virtue ethics and effective altruism are similar in that they output recommendations in ways that attempt to be amenable to human psychology. However, with its own heuristics, virtue ethics has thousands of years of ancient and modern philosophy from every civilization to build upon and learn from. Effective altruism is new.
There are three types of ethics in formal/academic philosophy: normative ethics, the ethics of what people should do generally; practical ethics, the ethics of what people should do in specific and applied scenarios; and meta-ethics, the philosophy and analysis of ethics as a discipline in its own right. When anyone thinks of any one ethical system, or “philosophy,” such as Kant’s categorical imperative, preference utilitarianism, or Protestant ethics, it’s almost always a system of normative ethics.

Because effective altruism is so different, trying to mimic science in so many ways to figure out how to meet existing goals, and accommodating whatever normative system people used to reach their moral goals so long as they converge on the same goals, it seems more like a system of practical rather than normative ethics. This makes it difficult to compare to other moral systems. The apparent lack of a way for effective altruism to determine which moral goals are worth pursuing is a fair criticism lobbed at the philosophy in the past, and one that philosophers like Will MacAskill and Peter Singer are researching how to resolve without forcing effective altruism to conform to one normative framework. That seems to be the role of meta-ethics in effective altruism. As it grows, though, effective altruism is becoming less necessarily theoretical or normative in its formulation. It’s a movement started by philosophers which, in fulfilling its goals, may become less philosophical and more pragmatic.
That’s a challenge, and a unique one. Effective altruism seems a suitable moral philosophy to me, for more reasons than the fact that it can be made consistent with other ethical worldviews, whether deontological or consequentialist, religious or secular. From a practical perspective, I think effective altruism is “right,” but because it’s so odd among intellectual movements, I’m not sure what to compare it to.
“The apparent lack of a way for effective altruism to determine which moral goals are worth pursuing … That seems to be the role of meta-ethics in effective altruism.”
Maybe the answer is not to be found in meta-ethics or in analysis generally, but in politics, that is, the raw realities of what people believe and want any given moment, and how consensus forms or doesn’t.
In other words, I think the answer to “what goals are worth pursuing” is, broadly, ask the people you propose to help what it is they want. Luckily, this happens regularly in all sorts of ways, including global scale surveys. This is part of what the value of “democracy” means to me.
A man named Horst Rittel, who also coined the term “wicked problem,” wrote a wonderful essay on the relationship between planning for solving social problems and politics, which seems appropriate here: http://www.cc.gatech.edu/~ellendo/rittel/rittel-reasoning.pdf
tl;dr some kinds of knowledge are instrumental, but visions for the future are unavoidably subjective and political.
I’m not averse to such an approach. I think the criticism of how effective altruism determines a consensus on what defines or philosophically grounds “the good” comes from philosophers or other scholars who are wary of populist consensus on ethics when it’s in no way formalized. I’m bringing in David Moss to address this point; he’ll know more.
“Maybe the answer is not to be found in meta-ethics or in analysis generally, but in politics, that is, the raw realities of what people believe and want any given moment, and how consensus forms or doesn’t.
In other words, I think the answer to ‘what goals are worth pursuing’ is, broadly, ask the people you propose to help what it is they want. Luckily, this happens regularly in all sorts of ways, including global scale surveys.”
I guess it depends on what you mean by “what people believe and want any given moment.” If you interpret this as: the results of a life satisfaction survey or maximising preferences or something, then the result will look pretty much like standard consequentialist EA.
If you mean something like: the output of people’s decisions based on collective deliberation, e.g. what a community decides they want collectively as the result of a political process, then it might be (probably will be) something totally different to what you would get if you were trying to maximise preferences.
Which of these is closer to what you meant?
I believe one aspect of earning to give that is understudied, and that would have significant impacts on these calculations, is the long-term viability of giving rates at the individual level. The earning to give strategy necessarily places altruistic people in the midst of largely non-like-minded individuals for decades at a time. In what world do we not think this will have an effect on the working givers? To not consider defection rates is naive at best and sloppy science at worst.