What is your high-level take on social justice in relation to EA?
Niel_Bowerman
Hi Lauren, this is Niel from 80,000 Hours. We’ve already discussed this over email, but I’m excited that new organisations are being set up in this space. 80,000 Hours has limited resources and is not planning on increasing the amount we invest in improving our advice for animal advocates in the near term. I’m hopeful that Animal Advocacy Careers will be able to better serve the animal advocacy community than we can. Best of luck with the project!
In the current regime (i.e. for increases of less than ~4 degrees C), warming is roughly linear with cumulative carbon emissions (which is different from CO2 concentrations). Atmospheric forcing (the net energy flux at the top of the atmosphere due to changes in CO2 concentrations) is roughly logarithmic with CO2 concentrations.
How temperatures will change with cumulative carbon emissions once warming exceeds ~4 degrees C above pre-industrial is unknown, but the relationship will probably be somewhere between super-linear and logarithmic depending on what sorts of feedback mechanisms we end up seeing. I discuss this briefly at this point in this talk: https://youtu.be/xsQgDwXmsyg?t=520
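For intuition, here is a minimal sketch of the two relationships (my own illustration, not something from the talk): radiative forcing is often approximated as ΔF ≈ 5.35 ln(C/C0) W/m², and warming in the linear regime can be approximated with a transient climate response to cumulative carbon emissions (TCRE); the specific TCRE value of ~1.6 °C per 1000 GtC below is an assumed, indicative figure.

```python
import math

# Illustrative constants (assumed, indicative values; not from the talk):
FORCING_COEFF_W_M2 = 5.35       # coefficient in the approximation dF = 5.35 * ln(C/C0)
TCRE_DEG_C_PER_1000_GTC = 1.6   # assumed transient response per 1000 GtC of cumulative emissions

def radiative_forcing(co2_ppm, co2_ppm_preindustrial=280.0):
    """Approximate forcing (W/m^2) from a change in CO2 concentration (logarithmic)."""
    return FORCING_COEFF_W_M2 * math.log(co2_ppm / co2_ppm_preindustrial)

def warming_from_cumulative_emissions(cumulative_gtc):
    """Approximate warming (deg C) from cumulative carbon emissions (linear regime only)."""
    return TCRE_DEG_C_PER_1000_GTC * cumulative_gtc / 1000.0

# Doubling CO2 gives ~3.7 W/m^2 of forcing; ~1000 GtC of cumulative emissions gives ~1.6 deg C.
print(radiative_forcing(560.0))                   # ~3.71
print(warming_from_cumulative_emissions(1000.0))  # 1.6
```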
Btw, your link to FAO feedback on Indonesian broiler chickens leads to a discussion about Latvian egg-laying hens instead.
I think working on AI policy in an EU context is also likely to be valuable; however, few (if any) of the world's very top AI companies are based in the EU (except DeepMind, which will soon be outside the EU after Brexit). Nonetheless, I think it would be very helpful to have more AI policy expertise within an EU context, and if you can contribute to that it could be very valuable. It's worth mentioning that for UK citizens it might be better to focus on British AI policy.
Am I correct in thinking that under the UK tax system the consideration you have outlined in your post does not apply (because, if I understand correctly, we do not have a standard deduction), and that in fact the opposite becomes true once you hit the higher tax bracket, for the reasons outlined by John_Maxwell_IV above? And that if you are not earning above the higher tax bracket, the two approaches would be equivalent?
Yeah, I imagine there’s some version of the subdomain option that could work. I’ll put this on Kerry Vaughan and Tyler Alterman’s radars as they are now managing those domains.
eaforum.org now redirects to this forum
Marketing Effective Altruism: What can we expect from book sales?
Hi Jorgen,
Great to hear that you guys are planning this. I’d be happy to chat with you about it sometime, and offer some thoughts. My availability is here: calendly.com/niel-bowerman/2
You should also talk with Chris Jenkins (chris@centreforeffectivealtruism.org) if you are considering doing a bulk order.
Looking forward to speaking.
Niel
To clarify, while ‘The Most Good You Can Do’ is not a CEA project, in that we do not own rights to the book, it is a CEA project in that we are coordinating the global marketing campaign for the book with Goldberg McDuffie Communications (USA), Yale University Press (USA), Yale University Press UK (Europe), Text Publishing (Australia), and The Life You Can Save doing the bulk of the work. We will be playing a similar role for William MacAskill’s book, except that the holding company for the rights for Will’s book is contractually obliged to donate the royalties beyond the advance to CEA.
I imagine that EA Advocates will promote a range of both CEA and non-CEA projects, where the requirement is that the actions taken by participants have a particularly high value.
Thanks for these comments Peter. I think I agree with most of them. To respond specifically to the one I have additional information about:
In the interview with Tim Harford, Elie and Niel discussed SCI, and Tim Harford decided to donate there at the end of the show. After the appearances, SCI contacted us to report that they had received several thousand pounds in donations as a result of our media appearances. The exact amount SCI received as a result of this media attention was difficult for them to estimate relative to the variable background rate, but they suggested it may have been as much as £10,000.
That’s pretty awesome. How does SCI estimate that? It does seem pretty difficult to me.
If I understood Alix at SCI correctly, the rate of online donations in the few days after the show and associated article was many times higher than usual (perhaps even more than an order of magnitude higher—I can’t remember exactly), and so they were estimating the difference between the increased rate and the background rate. This assumes that the spike and the additional donations were due to the media attention, which may well be a false assumption, but given the immediacy of the spike in donations, the scale of the spike, the prominence of the media attention, and the prominence of SCI in the media attention, I am inclined to think that most of the spike was probably down to the media attention. One other thing to note is that, if I understand correctly, this figure only includes donations made directly to SCI, and does not include any donations made to SCI via GiveWell, who were also featured prominently in the media attention. Nonetheless I agree that it is difficult to estimate exactly how much in additional donations went to SCI.
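As a rough sketch of that kind of back-of-the-envelope estimate (my own illustration with made-up numbers, not SCI's actual figures), you compare donations in the days after the media appearance against the expected background rate:

```python
# Hypothetical numbers for illustration only; SCI's actual figures were not shared publicly.
background_rate_per_day = 500.0                       # typical online donations (GBP/day) before the show
observed_daily_donations = [6200.0, 3100.0, 1800.0]   # daily totals in the days after the show

# Counterfactual estimate: observed total minus what the background rate would predict.
expected_background = background_rate_per_day * len(observed_daily_donations)
estimated_media_effect = sum(observed_daily_donations) - expected_background

print(f"Estimated additional donations: £{estimated_media_effect:,.0f}")  # £9,600 in this toy example
```

As noted above, this attributes the whole spike to the media attention, so it is best read as an upper-bound style estimate.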
This would save me enough time that I’d happily pay £s per newsletter! Thanks so much for offering to put this together.
Will you make prices and projects public after the first round so that we can calibrate?
What is your assessment of the recent report by FHI and the Global Challenges Foundation? http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact-full-report-1.pdf
How will your integrated assessment differ from this?
How many man-hours per week are currently going into GCRI? How many paid staff do you have, and who are they?
I can’t make it for the AMA, but I’m going to load up some questions here if that’s OK…
What would you say is the single most impressive thing that GCRI has achieved to date? (I’ll put other questions in other threads)
I agree they are relatively similar. We’ve been keeping the publishers up to date with the plans of the other authors and publishers that are publishing books on EA in 2015. Thus the publishers think that these dates are pretty optimal in terms of when we would want them all released: spaced out enough that each can get its own media coverage and attention, but close enough that people can write about the trend and broader movement of EA with so many books coming out around the same time.

I am a little worried that they will compete for attention, which is part of the reason why I’m coordinating both Will’s and Peter’s marketing, so that they can collaborate where possible.

I’ve been thinking about this quite a bit recently, and I’ve settled on thinking that each book trying to maximise its own success is actually going to be really quite close to optimal, so I’m going to be adopting a strategy that is not far from that. Essentially, the chance of any one promotional push putting a lot of media attention on EA is relatively small, and so we want as many rolls of the dice as possible.
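To spell out the "rolls of the dice" point with illustrative numbers (my own, not figures from the actual campaigns): if each promotional push independently has a small chance of generating major media attention, the chance that at least one succeeds grows quickly with the number of pushes.

```python
# Illustrative assumption: each promotional push independently has a 10% chance
# of generating major media attention for EA.
p_per_push = 0.10

for n_pushes in (1, 2, 3, 5):
    p_at_least_one = 1 - (1 - p_per_push) ** n_pushes
    print(f"{n_pushes} push(es): P(at least one big media hit) = {p_at_least_one:.0%}")
# 1 push: 10%, 2: 19%, 3: 27%, 5: 41%
```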
Hi Chris,
This is a good question. Many of the sub-projects that we are doing are one-off opportunities which we are unlikely to seek funding for in future years (e.g. a publicist for Will and Peter). Other projects are experiments that we would like to repeat and/or expand in future years if they are successful, such as EA Global, the EA Fellows Programme, etc.
EA Outreach as a whole is also in this category—if it is successful (or more accurately if it looks in hindsight like it was a worthwhile bet) then we would like to continue working on it and funding it. On the other hand, if the project is not successful (or more accurately does not look like it was a worthwhile bet), then we would like to discontinue it. My guess at this stage is that we will want to seek further funding for EA Outreach activities in future years, as this seems to be an under-invested area within the EA movement and we seem to be well placed to execute on it; however, much will depend on how much we achieve over the coming year.
I hope that answers your question, and let me know if you have any others.
Cheers,
Niel
Great question. I’m afraid I only have a vague answer: I would guess that the chance of climate change directly making Earth uninhabitable in the next few centuries is much smaller than 1 in 10,000. (That’s ignoring the contribution of climate change to other risks.) I don’t know how likely the LHC is to cause a black hole, but I would speculate with little knowledge that the climate habitability risk is greater than that.
As I mentioned in the talk, I think there are other emerging tech risks that are more likely and more pressing than this. But I would also encourage more folks with a background in climate science to focus on these tail risks if they are excited by questions in this space.