Re: Leisure time. If I weren’t leading EAB (which took ~10 hours of my time each week), I probably would have taken another class, gotten a paid part-time job as a TA, or done technical research with a professor. I’m not positive how representative this is across the board, but I think it’s likely true of at least some other chapter leaders, and more likely to be true of the most dedicated ones (who probably produce a disproportionate share of the value of student groups).
Hi Michael! Thanks for engaging with GiveWell research; we appreciate it. As others in the comments have pointed out, many of the critiques in your post would have applied to our earlier November 2015 cost-effectiveness analysis (CEA). Our 2016 CEA has changed a lot, in part because more staff engaged deeply with the kinds of population ethics considerations you raise. Because of this, I think most people will not have to make substantial discounts to our 2016 cost-effectiveness estimate of the Against Malaria Foundation to account for their values. I wrote a longer response on the GiveWell blog here: http://blog.givewell.org/2016/12/12/amf-population-ethics/
Thanks, Dan! I didn’t know this; I’ll look more closely at the data when I get the chance.
It seems like “deeply committed” is doing a lot of work there. In the last EA survey, the median donation among people who identified as “EA,” listed “earning to give” as their career, were not students, and believed they should give now rather than later was $1,933. At typical starting software engineer salaries (which I would guess is a typical career for a median “earning to give” EA), that represents a donation of roughly 1–5% of income (a quick sanity check of the implied range is sketched below). Since the pledge asks for more than the median such EA currently gives, this suggests the pledge would increase the donations of over 50% of EAs who list their primary career path as earning to give (so the argument that the mental effort needed to keep the pledge would distract from their careers doesn’t apply). Link to analysis here: https://www.facebook.com/bshlgrs/posts/10208520740630756?match=YnVjayBzaGxlZ2VyaXMsc2hsZWdlcmlzLHN1cnZleSxidWNr
Edit: Speaking for myself only, not my employer.
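A minimal sanity check of that 1–5% range; the $1,933 median is from the survey analysis linked above, but the salary figures are illustrative assumptions of mine, not survey data:

```python
# Back-of-the-envelope: what share of income is a $1,933 annual donation
# at various starting software-engineer salaries? (Salaries below are
# illustrative assumptions, not figures from the survey.)
median_donation = 1933

for salary in (40_000, 70_000, 100_000, 190_000):
    pct = 100 * median_donation / salary
    print(f"${salary:,} salary -> {pct:.1f}% donated")

# Prints roughly 4.8%, 2.8%, 1.9%, and 1.0%, which brackets the 1-5% claim.
```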
Some kind of anonymous survey mechanism that managed to capture people who had interacted with EA in a low-to-medium-intensity way (e.g., through the Facebook group or one of the websites, by playing a giving game at a campus group, or by attending a single meetup of a campus group) and tracked a) whether they went on to interact at higher intensity (e.g., applying to EAG), and b) whether they internally felt it was welcoming.
Views my own, not my employer’s.
Thanks for writing this up! I agree that it could be a big win if general EA ideas besides cause prioritization (or the idea of scope-limited cause prioritization) spread to the point of being as widely accepted as environmentalism. Some alternatives to this proposal, though:
It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, and cognitive biases completely unconnected to altruism than to try to explicitly spread narrow or cause-specific EA. On average, people care much more about being productive, making money, having good relationships, finding meaning, etc. than about their preferred altruistic causes. And it really would be a big win if this succeeded, less ambiguously so than with narrow EA, I think (see Carl’s comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.
Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g., math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I’m unsure how difficult this would be.
Both of these alternatives seem to have what is (to me) an advantage: they don’t involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people could form high-trust collaborations relatively quickly.
FWIW, I think I would be much more excited to evangelize broad low-level EA memes if there were some strong alternative channel to distinguish cause-neutral, super intense/obsessive EAs. Science has a very explicit distinction between science fans and scientists, and a very explicit funnel from one to the other (several years of formal education). EA doesn’t have that yet, and may never. My instinct is that we should work on building a really, really great “product,” then build high, publicly recognized walls between “practitioners” and “consumers” (a practical division of labor rather than a moral-high-ground thing), and then market the product hard to consumers.