Another potential explanation (entirely compatible with the above): Giving What We Can gradually started taking on paid staff from the latter half of 2012; this was a big injection of person-hours and money into GWWC. Between start-up costs and the delay before new initiatives were implemented and people actually joined as a result of those initiatives, the effects of this injection were only felt a year later.
William_MacAskill
There’s also the feedback we get in talks, and the comments on all the articles and media attention we’ve gotten, which is very extensive. I’ve also presented on these topics in an academic setting.
And I asked for feedback here: https://www.facebook.com/wdcrouch/posts/10100610793427240?stream_ref=10
From this, I feel I know the most common criticisms of EA (as practiced, rather than in theory) pretty well.
- doesn’t appreciate the importance of systemic change
- is too focused on measurable benefits rather than unquantifiable ones
- smuggles utilitarian assumptions in under the table (e.g. that you can aggregate small benefits and weigh them against large benefits; e.g. that you shouldn’t be much more concerned with avoiding causing harm than with actively doing good) …
However, I haven’t seen a smart outside person spend a considerable amount of time evaluating and criticising effective altruism. These objections are just the ones people think of off the top of their heads. I’d really like to see what someone who spent, e.g., a week investigating and criticising EA would say.
This piece by George Monbiot represents one strand of potential deep criticism, which is that many goods are incommensurable in value: http://www.monbiot.com/2014/07/24/the-pricing-of-everything/
This is a pretty common view in philosophy, and it would make the EA project much more limited in what it could achieve.
I loved this piece. Between this and Jeff Kaufman’s discussion of John Wesley, founder of the Methodist movement, it makes me wonder how many other effective altruists there are in history.
Very interesting project!
Do you buy certificates on expected good done ex ante or on actual good accomplished? Eg if I previously bought a $1 lottery ticket with an expected value of $2 (“gambling to give”) which didn’t win, can I sell that for $2 or not? I can see arguments either way.
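The ex ante / ex post distinction in the question can be made concrete with a small illustrative sketch (the function names and the 50% chance at a $4 payout are hypothetical, chosen so the expected value matches the $2 figure above):

```python
# Two hypothetical ways of pricing an impact certificate for a gamble.

def ex_ante_value(payout, win_probability):
    """Value the ticket by its expected good at the time it was bought."""
    return payout * win_probability

def ex_post_value(payout, won):
    """Value the ticket by the good it actually produced."""
    return payout if won else 0.0

# A $1 lottery ticket with a 50% chance of a $4 payout:
print(ex_ante_value(4.0, 0.5))        # ex ante: 2.0
print(ex_post_value(4.0, won=False))  # ex post, after losing: 0.0
```

Under ex ante pricing the losing ticket could still sell for $2; under ex post pricing it is worth nothing.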
Should I feel hesitant about submitting many projects for the information value?
Hi Peter, thanks for the great comments!
> First, I find it hard to believe that $2.78 is really generated via online referral; I imagine some data integrity problems or sampling problems are leading to a large skew. I expect the marginal value of an additional unique visit to GiveWell is much less than $2.78.
That sounds plausible to me too. Insofar as I’m ultimately concluding that this sort of media courting isn’t justified by its short-run impact, using an optimistic figure for impact-per-clickthrough is the conservative assumption. It would still be useful to get into this more if someone wants to run an ‘online advertising to GiveWell’ project (which is something I’d love to see come out of .impact or EA Ventures—it’s an idea that’s been floated for quite a while).
> Additionally, I also suspect that people coming to GWWC and pledging based mostly on an article are going to be lower-quality pledges in terms of the amount that is actually donated.
Yes, my anecdotal view from previous media rounds was that people who saw GWWC online and immediately joined were a) giving a lot more anyway; b) giving to what are probably less effective organisations. HOWEVER, there are two big aspects of GWWC-related media impact that are neglected by focusing on immediate member growth: 1) Media attention reaches important people who we wouldn’t otherwise be able to contact. GWWC’s largest single donor (who’s donated >$8mn, mainly to a DAF, and who attributes 90% of this to GWWC) was influenced via the media; our contacts in UK government came through media attention. 2) Member increase is normally fuelled by multiple exposures to GWWC, and the media can be a good first exposure. Because of this, quantitative impact assessment is very hard; hence a lot of reflection and attempts to be common-sensey.
> I have this worry as well. But it’s also possible that while higher-quality articles get lower viewership, they may get higher-quality viewership in terms of increased response and conversion rates from the viewers that are attracted.
Yeah, the ‘shit spreads’ explanation would also make sense of some oddities about who the most high-profile influencers are—I think that Dawkins’ crazy behaviour online probably contributes to his popularity-as-measured-by-Twitter-followers.
An additional reason against controversy is that avoiding it preserves option value: you can always be more controversial later on. However, I’m also starting to notice that this aspect is very difficult to control; things that don’t seem controversial to me can get regarded as highly controversial, often through crazy misreadings. Example: if you actually read ‘The Skeptical Environmentalist’ by Bjorn Lomborg, he says things like ‘obviously climate change is happening and it’s bizarre that people would even question it’ (not an actual quote), yet in the media he was painted as a climate denier.
Doesn’t GiveWell already run a lot of Google Adwords? How well do those do?
They do, because they get them for free. I don’t know how well they do.
Evan:
> I’ve got some concerns about how “brand management” might be a shiny veneer to cover “centralization” from Oxford or the Bay Area, and what the consequences of it may be. These concerns aren’t about projects in Oxford, per se, but about some aspects of Effective Altruism Outreach. I’ll save most of them for a later post on the subject broadly. My question now is: how much will EA Outreach focus upon centralizing and narrowing who shares information, and to what extent will you make attempts at tamping down other voices in the movement in terms of “damage control”?
Impala:
> That would explain how concerned people mostly come from these places, and why they have unusually high concern for a social movement.
Thanks for raising this concern. I think that a post on the topic would be really valuable; we certainly don’t want to lose the benefits of diversity, to shut down debate, or to become a bottleneck. I don’t think we have unusually high concern regarding branding within the relevant reference class: almost all businesses are highly concerned with brand, and free-market promoters are highly brand-concerned. Even PETA is highly concerned with brand management in its own way, e.g. using models and celebrities to push veganism as ‘sexy’. For movements with unfortunate brands (I’d put animal rights, feminism, and to some extent environmentalism in that category), that seems like a big problem and one we want to avoid. It’s also a co-ordination problem (see ‘the unilateralist’s curse’), and if you’ve got a diverse range of groups within the movement who differ pretty fundamentally, then co-ordination is going to be difficult; I think that may explain why those movements suffer these branding issues. This is something we really should try to avoid. (Also, on the Bay: my impression, which might be wrong (based on conversations with Geoff Anders), is that they agree it’s a bad problem, but think that there’s not much we can do about it and so don’t actually spend much time on it. I think it’s worth trying, but I agree it’s a judgment call.)
In terms of concern for this—this is something I’ve changed my view on a lot over the last 5 years. Because there’s so much bullshit-speak surrounding brand management, I was initially very skeptical of its value. But experience with the different orgs has made me switch my view a lot. The most important example of this was back when 80k was ‘High Impact Careers’ and was quite aggressive in its marketing—focusing on earning to give and banking/doctors comparisons. This was a disaster. People thought we were being deliberately contrarian (and they were right), and we’re even still trying to overcome the impression that 80k is only about promoting earning-to-give. Also people lost the message—e.g. some people thought we were promoting finance in its own right; many people didn’t realise that charity effectiveness is a core part of the argument.
Hey, this is a great discussion to have, so I’m really glad you posted it. You haven’t changed my views, and I don’t have time right now to go into details, and I haven’t read the comments yet, but I just wanted to raise a couple of points where you think we disagree but we don’t. Note the question we answered:
“At this point in time, and on the margin, what portion of altruistically motivated graduates from a good university, who are open to pursuing any career path, should aim to earn to give in the long term?”
“Long term” in that question brackets out the ‘career capital’ argument for EtG which you discuss above. I believe that a higher proportion of people should EtG in the short term than in the long term, because of the career capital benefits. (And I think I say something similar in the OP.)
“Open to pursuing any career path” brackets out the following consideration: “psychologically, earning to give seems to me to be a better fit for the average EA than direct work”. If we were just asking “what % of the EA community should (in a sense of ‘should’ that takes into account people’s psychologies, etc.) EtG?” and ran the survey among the 80k team again, I suspect the number would be higher than 15%. (And again, I thought I mentioned this in the OP as an argument for non-EtG: there are many people who are going to EtG whatever happens, so if you’re happy not EtGing, that’s a reason in favour of not EtGing.)
So I’m wondering what % you’d give in answer to the question we were asking, given clarifications 1 and 2? I’m worried there’s some miscommunication, because you seemed to be answering “What % of the EA community should EtG at any one time?” whereas we were answering a narrower question. (I don’t think we’ll have the same view, but it might be closer.)
In general, in “talent constraint vs funding constraint” discussions, I find it super important to be clear on exactly what question is being asked, as it’s easy to talk past one another.
+1 for ‘Dedicated EAs’ and ‘EAs’. I think 80k internally could describe all it wants to describe in simple English using those terms. It’s naturally a continuum. If you really really need to describe people who are into EA but not that dedicated then ‘less dedicated’ is fine. “Committed” could work too. (I understand ‘dedicated’ to mean: how highly someone scores on the product of ‘into effectiveness’ and ‘into altruism’.)
-1 for ‘full-time’ and ‘part-time’, I don’t think it conveys what we mean (at least, doesn’t to me; I’d be confused when I first heard it) and I’d personally find it annoying to be described as ‘part-time’.
+10000 for ditching ‘hardcore’ and ‘softcore’
As a ‘well-known’ EA, I would say that you can reasonably say that EA has one of two goals: a) to ‘do the most good’ (leaving what ‘goodness’ is undefined); b) to promote the wellbeing of all (accepting that EA is about altruism in that it’s always ultimately about the lives of sentient creatures, but not coming down on a specific view of what wellbeing consists in). I prefer the latter definition (for various reasons; I think it’s a more honest representation of how EAs behave and what they believe), though think that as the term is currently used either is reasonable. Although reducing suffering is an important component of EA under either framing, under neither is the goal simply to minimize suffering, and I don’t think that Peter Singer, Toby Ord or Holden Karnofsky (etc) would object to me saying that they don’t think of this as the only goal either.
Thanks!
> People have argued for i) flatter organizational structure, ii) pivoting from charity evaluation to more fundamental research (in order to add more value over and above GiveWell), and iii) growing emphasis of the EA brand for a while, so it’s good to see this feedback incorporated.
Yeah, I want CEA strategy to be guided significantly by the views of engaged members of the EA community. (Of course, that doesn’t mean we’ll always go with others’ views, not least because different people regularly disagree.) This, it seems to me, has both inside-view and outside-view support. Inside view: when I talk to engaged EAs, they often have interesting and well-reasoned views about what CEA should or should not be doing. Outside view: the current dedicated EAs are the equivalent of the ‘early users’ of EA as an idea, and the standard advice for startups is to pay a huge amount of attention to what early users want and be responsive to that. I also simply see CEA’s role in significant part as being to serve the EA community, so it’s obviously important to know what that community thinks is most important.
Thanks so much for this comment!
Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki?
Yes, the default will be that everything we produce is published openly.
> I’d also challenge you to think about what CEA’s “secret sauce” is for doing this research for donors in a way that’s superior to whatever other group they would consult with in order to have it done.
In most cases so far, the counterfactual is little research, rather than using some other consultancy. And in the wider landscape, there seems to be just very little in the direction of what we’d call EA charity recommendations. There’s GiveWell / Open Phil, there’s philanthropic advising that’s very heavily about understanding the preferences of the donor and finding charities that ‘fit’ those preferences, and there seems to us to be a very significant gap in the middle.
Some people have argued against this. I’m also skeptical.
In response to the linked-to article and notes: 1. I’m intuitively also very wary of EA engaging in partisan politics. Indeed, when I think of EA as applied to politics, I think of it as almost being defined by being non-partisan and opposed to tribal politics: you come to views on policy on a case-by-case basis, weighing all the best evidence, deeply understanding all the various viewpoints (to the point of passing ideological Turing tests), and being highly self-sceptical and on the lookout for ideological bias. 2. It’s also a major issue that whether certain policies are even good or bad can be incredibly difficult to know. E.g. when I think about AI policy, I can think of policies where I know the magnitude of the impact would be very great indeed, but have no idea about its sign. Or e.g. being pro EU immigration to the UK 10 years ago: surely good! But it ultimately led to the unintended consequence of Brexit (oh no, wait, I hadn’t thought about political equilibrium effects).
If that means we should abandon policy and politics as a whole, however, I think that would be wrong. Politics is a huge lever in the world, perhaps the single biggest lever, and to dismiss from the outset that whole method of making the world better would be to far too quickly narrow down our options.
This is an area where it plausibly does make sense to use a non-CEA label.
I agree that we need to think very carefully about what labels we use, and we should be very concerned with how the term ‘effective altruism’ might come to lose its meaning and value, or become the victim of malicious PR.
> As a broad question: I understand it’s commonly advised in the business world to focus on a few “core competencies” and outsource most other functions. I’m curious whether this also makes sense in the nonprofit world.
Because of this general principle, I stress a lot about how many different things CEA is doing. I’m not sure whether the general principle is right and we’re the exception to it, whether the principle just isn’t right for the sort of organisation we are, or whether we’re being irrational. My current instinct is that we should be aiming to focus more than we have done, and that we’ve just taken a good step in that direction.
Thanks!
Fundraising: The current plan is that CEA will fundraise for all projects (with me as lead on that). We’ll update all donors every two weeks with info across all CEA projects (most individual projects already do this for their own donors), and have an annual review.
Earmarking: Fungibility has been a headache since forever; in the past, ‘restricting’ to a particular project, even though we were very careful with the budget lines, wouldn’t completely avoid fungibility concerns, because other donors are responsive to room for more funding (RFMF) and would then become a little less likely to donate to a project that’s received more money.
The idea that’s currently in my head, but not (yet) a policy, is that, to a first approximation, we would only accept unrestricted donations, but every donor would be asked to ‘vote’ by telling us how, ideally, they would want their donation to be used. This ‘vote’ wouldn’t be binding on CEA, but would give us useful information about what smart people with money on the line think CEA should be doing more of. I take the views of our donors very seriously—they tend to be the external people who are most highly engaged with CEA’s work—and so it wouldn’t at all just be for show. I’d welcome ideas about other ways of doing donations.
And to be clear, previously restricted money to a CEA project will still be used in the manner it was restricted for, under the new CEA structure, unless the donor tells us that they’re happy to lift the restriction.
I’m sorry we can’t say more at this stage. One downside of policy work is that much of it can’t always have the same level of transparency as our other projects.
Thanks! Lots of points here.
One thing: despite the confusing name, from CEA’s perspective EAO was the organisation that included EAG and EAV as parts.
Working with other groups: I hope the new structure will make it quite a bit easier for other groups to co-ordinate with CEA, because the structure will be substantially simpler.
‘Exit assessment’: This is slightly complicated by the fact that there’s no simple “we tried this project and it didn’t work” story here. But I do hope to be able to write more about what things we’ve learned at CEA in the near future.
I think that e.g. talking to someone at 80k can help give you a sense of this—certainly better than nothing. If you’re thinking of leaving earning to give, but people at 80k can think of several examples of people who are currently earning to give and have greater comparative advantage at direct work, then we can at least say that someone’s making a mistake.
Early on in 80k, when promoting earning to give, we were regularly getting the opposite argument: that what we were promoting was too much of a sacrifice! I just about agree with you, but I think it’s unclear; there are a lot of people who want to do meaningful work and don’t care much about giving.
It’s pretty explicit in the original blogpost:
> One of the most common misconceptions that we’ve encountered about 80,000 Hours is that we’re exclusively or predominantly focused on earning to give. This blog post is to say definitively that this is not the case. Moreover, the proportion of people for whom we think earning to give is the best option has gone down over time.
>
> To get a sense of this, I surveyed the 80,000 Hours team on the following question: “At this point in time, and on the margin, what portion of altruistically motivated graduates from a good university, who are open to pursuing any career path, should aim to earn to give in the long term?” (Please note that this is just a straw poll used as a way of addressing the misconception stated; it doesn’t represent a definitive answer to this question).
>
> Will: 15%; Ben: 20%; Rob: 10%; Roman: 15%
>
> Instead, we think that most people should be doing things like politics, policy, high-value research, for-profit and non-profit entrepreneurship, and direct work for highly socially valuable organizations.
The purpose of the number was to show the view of 80k (which we perceived most people not to be aware of). I guess its usefulness depends on how reliable you think the gestalt judgment of the employees at 80k is.
The UK general election will be on 7th May 2015, whereas the main media launch will be in August 2015. So won’t we be OK? (But that’s a great consideration that I hadn’t even thought about.)