Am I correct in thinking that under the UK tax system the consideration you have outlined in your post does not apply (because we do not have a standard deduction if I understand correctly), and in fact the opposite becomes true once you hit the higher tax bracket for the reasons outlined by John_Maxwell_IV above? If you are not earning above the higher tax bracket then the two approaches would be equivalent?
Niel_Bowerman
Yeah, I imagine there’s some version on the subdomain option that could work. I’ll put this on Kerry Vaughan and Tyler Alterman’s radars as they are now managing those domains.
Hi Jorgen, Great to hear that you guys are planning this. I’d be happy to chat with you about it sometime, and offer some thoughts. My availability is here: calendly.com/niel-bowerman/2 You should also talk with Chris Jenkins (chris@centreforeffectivealtruism.org) if you are considering doing a bulk order.
Looking forward to speaking.
Niel
To clarify, while ‘The Most Good You Can Do’ is not a CEA project, in that we do not own rights to the book, it is a CEA project in that we are coordinating the global marketing campaign for the book with Goldberg McDuffie Communications (USA), Yale University Press (USA), Yale University Press UK (Europe), Text Publishing (Australia), and The Life You Can Save doing the bulk of the work. We will be playing a similar role for William MacAskill’s book, except that the holding company for the rights for Will’s book is contractually obliged to donate the royalties beyond the advance to CEA.
I imagine that EA Advocates will promote a range of both CEA and non-CEA projects, where the requirement is that the actions taken by participants have a particularly high value.
Thanks for these comments Peter. I think I agree with most of them. To respond specifically to the one I have additional information about:
In the interview with Tim Harford, Elie and Niel discussed SCI, and Tim Harford decided to donate there at the end of the show. After the appearances, SCI contacted us to report that they had received several thousand pounds in donations as a result of our media coverage. The exact amount SCI received as a result of this media attention was difficult for them to estimate relative to the variable background rate, but they suggested it may have been as much as £10,000.
That’s pretty awesome. How does SCI estimate that? It does seem pretty difficult to me.
If I understood Alix at SCI correctly, the rate of online donations in the few days after the show and associated article was many times higher than usual (perhaps even more than an order of magnitude higher—I can’t remember exactly), so they were estimating the difference between the increased rate and the background rate. This assumes that the spike and the additional donations were due to the media attention, which may well be a false assumption. But given the immediacy of the spike in donations, the scale of the spike, the prominence of the media attention, and the prominence of SCI in that coverage, I am inclined to think that most of the spike was probably down to the media attention. One other thing to note is that, if I understand correctly, this figure only includes donations made directly to SCI, and does not include any donations made to SCI via GiveWell, who were also featured prominently in the coverage. Nonetheless I agree that it is difficult to estimate exactly how much in additional donations went to SCI.
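The excess-over-baseline estimate described above can be sketched in a few lines. All figures here are made up for illustration; SCI’s actual daily totals and baseline rate were not published.

```python
def excess_donations(daily_totals, baseline_per_day):
    """Estimate donations attributable to a media spike by subtracting
    the usual background rate from each day's observed total."""
    return sum(max(0, day - baseline_per_day) for day in daily_totals)

# Suppose (hypothetically) SCI normally receives ~£300/day online, and in
# the three days after the broadcast received £4,000, £3,500 and £2,800:
estimate = excess_donations([4000, 3500, 2800], baseline_per_day=300)
print(estimate)  # 9400
```

The obvious caveat, as noted above, is that this attributes the whole spike to the media attention, which the subtraction itself cannot verify.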
This would save me enough time that I’d happily pay £s per newsletter! Thanks so much for offering to put this together.
Will you make prices and projects public after the first round so that we can calibrate?
What is your assessment of the recent report by FHI and the Global Challenges Foundation? http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact-full-report-1.pdf
How will your integrated assessment differ from this?
How many man-hours per week are currently going into GCRI? How many paid staff do you have, and who are they?
I can’t make it for the AMA, but I’m going to load up some questions here if that’s OK…
What would you say is the single most impressive achievement that GCRI has achieved to date? (I’ll put other questions in other threads)
I agree they are relatively similar. We’ve been keeping the publishers up to date with the plans of the other authors and publishers that are publishing books on EA in 2015. Thus the publishers think that these dates are pretty optimal in terms of when we would want them all released: spaced out enough that each can get its own media coverage and attention, but close enough that people can write about the trend and broader movement of EA with so many books coming out around the same time. I am a little worried that they will compete for attention, which is part of the reason why I’m coordinating both Will and Peter’s marketing, so that they can collaborate where possible. I’ve been thinking about this quite a bit recently, and I’ve settled on thinking that each book trying to maximise its own success is actually going to be really quite close to optimal, so I’m going to be adopting a strategy that is not far from that. Essentially, the chances of any one promotional push putting a lot of media attention on EA is relatively small, and so we want as many rolls of the dice as possible.
Hi Chris,
This is a good question. Many of the sub-projects that we are doing are one-off opportunities which we are unlikely to seek funding for in future years (e.g. a publicist for Will and Peter). Other projects are experiments that we would like to repeat and/or expand in future years if they are successful, such as EA Global, the EA Fellows Programme, etc.
EA Outreach as a whole is also in this category—if it is successful (or more accurately, if it looks in hindsight like it was a worthwhile bet) then we would like to continue working on it and funding it. On the other hand, if the project is not successful (or more accurately, does not look like it was a worthwhile bet), then we would like to discontinue it. My guess at this stage is that we will want to seek further funding for EA Outreach activities in future years, as this seems to be an under-invested area within the EA movement and we seem to be well placed to execute on it; however, much will depend on how much we achieve over the coming year.
I hope that answers your question, and let me know if you have any others.
Cheers,
Niel
What do you see as the biggest risks and failure modes of EA Outreach?
Some of the most salient failure modes for EA Outreach are in the individual sub-projects:
For Will and Peter’s books, the outside-view median outcome is that they don’t make a splash in the media and don’t sell very well. Unfortunately there are just so many books published each year (~1m per year) that the outside-view chances of ours being among the few that gain considerable attention and sell well are slim. Even when you account for the fact that we have a substantial advance and top-tier publisher, the outside view says that we’ll only sell a moderate number of books and will be unlikely to earn out more than the advance. The inside view says that we’ll do better than this because of the amount of resources going into the books, the fact that there is a movement behind them, and because people seem pretty interested in EA-style questions at the moment. The reason we are doing this is not for the median case, though; it’s for the upper tail in which we become a best-seller and EA becomes well known enough that media hosts feel the need to include it in their discussions of charity, philanthropy, and doing good. The outside-view chances of this happening are slim, and the inside-view chances are better but not huge, though we have been told by publishers, publicists, etc. that it is a real possibility. I will be working very hard over the coming year to give these books the best possible launches I can, but unfortunately this is still probably the biggest risk that we are taking.
My most salient worry for EA Global is that it doesn’t sell tickets, people don’t come, and it makes a major loss. We are going to be marketing it hard, and part of the reason for moving to a global model is that it makes it easier for more people to come as they won’t have to travel as far.
My biggest worry for EffectiveAltruism.org is that it doesn’t get much traffic. There are now many popular sites that discuss EA, and though EA.org is ranked at no. 5 when I do an incognito search for ‘Effective Altruism’, my main worry is that it won’t get enough traffic. My other worry is that we will never actually finish building it as other higher-priority projects will take precedence, but I see that as less of a risk.
I suppose my worries can be put into two broad categories, either we fail to get enough attention for EA, or we get the attention and fail to convert it sufficiently into growth of the movement. I think both of these are very real possibilities that we are working every day to reduce the chances of.
Does the claim “We are currently due to run out of funding next month” include the £62,500 donation? It seems like not, but you didn’t insert any caveats about that into your claim. At any rate, what’s the situation for marginal funds? What do you anticipate getting cut if you don’t meet your goal, and what would you do with funds over your budget (or will you just stop accepting donations)?
Unfortunately the £62,500 donation is only a ballpark figure at the moment and won’t be confirmed until late December or January. Sorry that I didn’t make this clearer.
The first thing that would be cut from the budget is an external publicist for Will’s book. We would have to rely on Penguin to do much of the publicity, and we would do as much as we had time for in-house as well. I would probably want to fundraise additional funds to hire a summer intern to help with marketing and pitching media outlets in this case.
You can see the full list of everything that we could fund if money was available in this spreadsheet (which uses this now-outdated documentation). The budget that we are using to fundraise includes only a small fraction of these opportunities, as they are the ones that we most wanted to fund.
At time of writing, we need an additional £15k on top of our current pledges to be able to fund our top priorities except Will’s publicist. Paying for Will’s publicist would require another ~£19k on top of that.
Four. How does funding this relate to funding CEA or other CEA sub-projects? It seems like part of your budget is actually a part of CEA central’s expenses, so presumably donations are somewhat fungible between the two?
The ‘Central Team’ within CEA can be thought of as providing services to the projects that it incubates, and so the projects split the costs of ‘Central’ CEA according to a splitting algorithm. Historically, unrestricted donations to CEA have been split between the different projects it incubates following the same kind of algorithm. In 2015, the use of unrestricted donations is likely to change somewhat: it is likely to include some fraction going to the different projects within CEA, some used to support the creation of new projects, and potentially some assigned at the trustees’ discretion. If you had donated to CEA unrestricted in 2014, approximately 11% of your donation would have gone to EA Outreach, with the remainder going to the Global Priorities Project, 80,000 Hours and Giving What We Can.
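To make the splitting concrete, here is a minimal sketch of a fixed-share allocation. The ~11% EA Outreach figure comes from the answer above; the other shares are purely hypothetical, as CEA’s actual algorithm was not published.

```python
# Hypothetical shares for splitting an unrestricted donation.
# Only the 11% EA Outreach figure is from the source; the rest are invented.
shares = {
    "EA Outreach": 0.11,
    "Global Priorities Project": 0.20,  # hypothetical
    "80,000 Hours": 0.34,               # hypothetical
    "Giving What We Can": 0.35,         # hypothetical
}

def split_donation(amount, shares):
    """Allocate an unrestricted donation across projects by fixed shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {project: round(amount * share, 2) for project, share in shares.items()}

allocation = split_donation(1000, shares)
print(allocation["EA Outreach"])  # 110.0
```

In practice a splitting algorithm like this would presumably be driven by each project’s actual use of shared services rather than fixed percentages.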
Two. What’s the difference between Will and Peter’s books? Their titles are extremely similar, so it’s hard to tell...
While the books are on a similar topic, they approach effective altruism from slightly different angles. For example:
Peter’s book is probably more focused on the altruism side of EA, while Will’s focuses more on the effectiveness side.
Peter’s book focuses slightly more on big giving and the good you can do with your money, whereas Will’s takes in a wider range of topics from career choice to consumerism.
Will’s book discusses a wider range of potential causes than Peter’s book does (I think; I’d have to double-check to be sure).
Hi Ben,
Great questions as always. I’m going to hand a couple of these over to Kerry Vaughan, but I’ll take a shot at answering most of them. Again, I’m afraid my answers are too long to fit into a single comment, so I’ll answer questions one-by-one.
It’s great that so many people are working on giving lots of people a positive initial impression of EA. But my sense is there’s a pretty big gap between “initial impression of EA” and “EA is a big part of my life” that isn’t being filled very well right now. Are there any plans to work on these later stages of the EA pipeline?
I agree that there is a need here. On p2 here I outline how our activities can be thought of as fitting into this pipeline. Some of them are earlier in the pipeline (making it easier for people to get up to speed on the key ideas in EA at effectivealtruism.org), while others are later (the EA Fellows Programme). The main activities targeting the later stages of this pipeline are:
EA Global, which is designed to allow lots of people to meet face-to-face to make it easy for people to dive into the community.
EA Fellows Programme, which is intended to provide an opportunity for a handful of high-potential people who are interested in EA to make it a much bigger part of their life.
And finally EA Ventures, which we hope will provide funding for more people to work full-time on EA projects.
In my experience becoming very engaged in EA often comes about as a result of a large amount of one-on-one interaction with people in the community, so we hope to build some tools into EffectiveAltruism.org to make this easier. I think that local chapters are likely to be a key part of how people become more involved, and I’m always interested to hear ways in which we might be able to help local chapters grow, so if you do have ideas let me know. I hear that you’re doing great work in this area already, so perhaps you have some suggestions?
Ultimately I think there is so much work to be done in the area you’ve mentioned that I would hope that there are people dedicated specifically to this aim in the future, and this is something that we are hoping to develop in CEA in time.
A lot of these learnings are written up in the various organisations’ annual and six-monthly reviews such as https://www.givingwhatwecan.org/sites/givingwhatwecan.org/files/Jacob%20Hilton/giving_what_we_can_six_month_review.pdf and https://80000hours.org/2014/05/summary-of-the-annual-review-may-2014/
Unfortunately I think that much of our learning in areas like marketing is not generally applicable enough to be useful to more than a dozen or so people in the world right now. We are talking with these people already and generally I find those conversations to be more useful than spending an equivalent amount of time writing up learnings because we can tailor the conversation to specific circumstances.
For example, writing up my policy learnings ( http://effective-altruism.com/ea/7e/good_policy_ideas_that_wont_happen_yet/ ) took me at least 1.5 days, and it is unclear to me whether this was better than having 15 one-hour conversations with interested people. This was a case where I had particularly well-organised thoughts and potentially novel insights, so I find it likely that in cases where I have less-insightful and worse-organised thoughts it would be better for me just to have the conversations instead, which is the route I am currently going down with a lot of this stuff.
I would be interested in your thoughts on this as someone who does take the time to write up substantial amounts of your thinking. How do you compare the trade-off against spending the same amount of time simply having conversations with people? I’m pretty open to the idea that I’m not spending enough time writing up my learnings, but at the moment I’m trying to focus my effort on conversations instead as I think that’s where more value lies.
These are more great questions Ben. Do you mind if I come back to you on them on Monday as I’m going to try and take today as a day off? Thanks in advance.
I think working on AI policy in an EU context is also likely to be valuable; however, few (if any) of the world’s very top AI companies are based in the EU (except DeepMind, which will soon be outside the EU after Brexit). Nonetheless, I think it would be very helpful to have more AI policy expertise within an EU context, and if you can contribute to that it could be very valuable. It’s worth mentioning that for UK citizens it might be better to focus on British AI policy.