I don’t know if this is relevant to the criticism theme, but taking some of Hanson’s ideas seriously was a necessary step for me before becoming involved in EA, though his insistence on calling everything hypocrisy was a turn-off. Are there any resources on how we evolved to be the way we are (interested in self and immediate family, signalling, etc.), framed as a good thing, on the grounds that once we know this we can do better?
Giles
However, I haven’t seen a smart outside person spend a considerable amount of time evaluating and criticising effective altruism.
Would they do it if we paid them?
Definitely. At least some of the team are EA insiders who lurk on this very forum, and they’ll already know about TLYCS for sure.
Oh yeah, good point.
Another criticism: the movement isn’t as transparent as you might expect. (Remember, GiveWell was originally the Clear Fund—started up not necessarily because existing charitable foundations were doing the wrong thing, but because they were too secretive.)
When compiling this table of orgs’ budgets, I found that even simple financial information was difficult to obtain from organizations’ websites. I realise I can just ask them—and I will—but I’m thinking about the underlying attitude. (As always, I may be being unfair).
Also, what Leverage Research are up to is anybody’s guess.
“Giles has passed on some thoughts from a friend” is one of the things cited, so if a particular criticism isn’t listed we can assume it’s because Ryan doesn’t know about it, not that it’s inherently too low status or something. I definitely want to hear what your friends have to say!
Also, have you got in touch with the good people at Charity Science?
Great idea!
Does the pamphleting have to be done on Fridays, or can it be done on pseudo-random days? (I’m thinking about distinguishing the signal from the pamphlets from, e.g., people spending more time on the Internet during weekends. Pseudo-random spikes might require fancier math to pick out, though; see the sketch below. And of course you need to remember which days you handed out pamphlets!)
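To make the “fancier math” concrete, here’s a minimal sketch (all numbers invented) of how a pamphlet effect on pseudo-randomly chosen days could be separated from a weekend effect with a simple regression:

```python
# Minimal sketch with made-up numbers: simulate daily pledge counts with a
# weekend effect plus a bump on pseudo-randomly chosen pamphleting days,
# then use ordinary least squares with indicator variables to pick the
# pamphlet signal apart from the weekend signal.
import numpy as np

rng = np.random.default_rng(0)
n_days = 180
weekend = np.array([(d % 7) in (5, 6) for d in range(n_days)], dtype=float)
pamphlet = np.zeros(n_days)
pamphlet[rng.choice(n_days, size=20, replace=False)] = 1.0  # pseudo-random pamphlet days

# Hypothetical true effects: baseline 3 pledges/day, +2 on weekends, +1.5 on pamphlet days.
pledges = rng.poisson(3 + 2 * weekend + 1.5 * pamphlet)

# Design matrix: intercept, weekend indicator, pamphlet indicator.
X = np.column_stack([np.ones(n_days), weekend, pamphlet])
coef, *_ = np.linalg.lstsq(X, pledges, rcond=None)
print(coef)  # should roughly recover [3, 2, 1.5]
```

The point is just that as long as you record which days you pamphleted, the weekend confound drops out of the estimate.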
Can you ask people, when they take the pledge, how they found out about TLYCS? (This will give an underestimate, but it can be used to sanity-check other estimates.) (It’s also a bit ambiguous if someone had, e.g., vaguely heard of TLYCS or Singer before, but the pamphleting prompted them to actually take the pledge.)
There’s a typo in your text (“require’s”) - make sure you get the pamphlets proof-read :)
Do you know in advance what you expect, in terms of:
How many pamphlets you will distribute
What the effect will be?
(Last I heard, EA was using predictionbazaar.com and predictionbook.com as its prediction markets)
Here’s the link to the Facebook group post in case people add criticisms there.
Glad you linked to Holden Karnofsky’s MIRI post. Other possibly relevant posts from the GiveWell blog:
Why we can’t take expected value estimates literally (even when they’re unbiased); I remember this causing a stir on LW
There are more on a similar philosophical slant (search for “explicit expected value”) but the above seem the most criticismy.
Great topic!
I think you missed this one from Rhys Southan which is lukewarm about EA: Art is a waste of time says EA
I don’t see the Schambra piece as particularly vitriolic.
I don’t know where to find good outside critics, but I think there’s still value in internal criticism, as well as doing a good job processing the criticism we have. (I was thinking of creating a wiki page for it, but haven’t got around to it yet).
Some self-centered internal criticism; I don’t know how much this resonates with other people:
I posted some things on LW back in 2011 which were badly received (and which I’m too embarrassed to link to). This was either a problem with me, or the LW community, or more likely both
I spend a lot of time on EA social media when I could be doing more productive stuff
I feel like a standard-issue generic EA—like I’ve internalized all the memes but don’t have huge amounts of unique ideas or abilities to bring to the table
Similarly my mental model of people in the EA movement is that they’re fairly interchangeable, rather than each having their own strengths, weaknesses and personalities
In particular, I haven’t really managed to make friends with anyone I met through EA
I spend a lot of time talking about EA but haven’t actually donated much to charity yet
In the past I’ve felt strong affiliation to an EA subtribe (xrisk), viewing the poverty and animal people as outgroups
Also:
We mostly speak English and are not as ethnically diverse as we could be
One of the central premises of EA, that some charities are many times more effective than others, is a pretty bold claim. I’d like to be able to point to a mountain of evidence backing it up, but I’m not sure where that evidence is to be found.
Is it working now? I wondered why I wasn’t getting more karma ;-)
Is anybody else having problems with the image upload feature of the forum?
there’s going to be some optimal level of abstraction
I’m curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:
http://effective-altruism.com/ea/b2/open_thread_5/1fe
Also, I know that I’d really like an expected-utilons-per-dollar calculator for different organizations, to help determine where to give money; that surely involves a lot of philosophy (a toy sketch of the mechanical part is below).
Note: I didn’t actually give this a go.
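For concreteness, here’s a toy version of the calculator I’m imagining. The organisation names and numbers are hypothetical placeholders; all the hard philosophy is hidden in producing the utilon estimates, not in the arithmetic:

```python
# Toy expected-utilons-per-dollar calculator. The orgs and figures below
# are invented placeholders, not real estimates.
orgs = {
    "OrgA": {"cost_per_outcome": 5.0,  "utilons_per_outcome": 1.0},  # e.g. a health intervention
    "OrgB": {"cost_per_outcome": 50.0, "utilons_per_outcome": 8.0},  # e.g. a research grant
}

def utilons_per_dollar(org):
    # Expected utilons bought by one marginal dollar.
    return org["utilons_per_outcome"] / org["cost_per_outcome"]

# Rank organizations by expected utilons per dollar, best first.
for name, org in sorted(orgs.items(), key=lambda kv: -utilons_per_dollar(kv[1])):
    print(f"{name}: {utilons_per_dollar(org):.3f} utilons/$")
```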
As a separate point, I’m not sure what % of unrestricted donations to GiveWell go to its own operations as opposed to being granted to its recommended charities.
You could try to convince someone who normally gives to SCI to give say $500 to CEA on condition that you’ll try really hard at running an SCI-fundraiser.
It feels more honest to make the amount constant rather than have it depend on the total amount raised, as in a scheme where each marginal dollar donated results in $2 going to SCI (including the match).
I don’t know if this approach can be considered cheaty though.
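Spelling out the two schemes with made-up numbers:

```python
# Illustrative comparison of the two schemes (all amounts invented).
donations = [100, 250, 50]        # hypothetical donor contributions
raised = sum(donations)           # 400

# Marginal matching at 1:1: each donated dollar becomes $2 for SCI,
# which is the claim that feels potentially "cheaty" above.
sci_total_marginal = raised * 2   # 800

# Constant scheme: SCI receives only what donors gave, and a fixed
# $500 goes to CEA regardless of how the fundraiser performs.
sci_total_constant = raised       # 400
cea_payment = 500

print(sci_total_marginal, sci_total_constant, cea_payment)
```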
I’m curious how much one-off work it would take to produce this kind of report automatically. Like Peter Hurford, I kept a spreadsheet of who I’d contacted and whether they’d responded or donated. It feels like if this were integrated into the donation platform, we could generate statistics on what percentage of people contacted responded with donations, etc. But I don’t know how valuable that level of detail is.
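To show what I mean, here’s a rough sketch of auto-generating those statistics, assuming the platform exported a simple CSV of contacts (the file name and column names are my own hypothetical schema, not an existing feature):

```python
# Sketch: compute response/donation rates from a hypothetical contacts CSV
# with columns "name", "responded" (yes/no), "donated" (yes/no), "amount".
import csv

def fundraiser_stats(path):
    contacted = responded = donated = 0
    total_raised = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            contacted += 1
            if row["responded"] == "yes":
                responded += 1
            if row["donated"] == "yes":
                donated += 1
                total_raised += float(row["amount"])
    return {
        "response_rate": responded / contacted,
        "donation_rate": donated / contacted,
        "total_raised": total_raised,
    }

print(fundraiser_stats("contacts.csv"))  # hypothetical export file
```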
Another fundraiser report
I’ve been talking a lot to an EA outsider, and she offers the following opinions (which I’m expressing in my own words; she hopes to flesh this out into a blog post at some point).
1) EA is arrogant
The name “effective altruism”
The name “effective altruist”
The attitude towards most charities and their supporters
The attitude of the Friendly AI people
2) EA is individualistic. It values individual wellbeing, not (for their own sake):
Art and culture
The environment
Communities
3) EA is top-down
Orgs like GiveWell call the shots
Charities aren’t based in the countries they’re trying to help
Donors are armchair-based, don’t visit the communities they’re trying to help
4) EA promotes incremental change, not systemic change
Charity rather than activism
Part of the capitalist system; money-focussed
5) EA is somewhat one-size-fits-all
Donors have particular causes that are important to them
Art patrons favour particular artists; they aren’t trying to “maximize the total amount of art in existence”
6) Many consequences are hidden
If you’re a teacher, how do you know what effect you ultimately have on your students?
7) How do you assess the actual needs of the communities you’re trying to help?
Have you asked them?
8) The whole Friendly AI thing is just creepy.
If it is real, it means a tiny elite is making huge decisions for everyone, without consulting people about whether that’s what they want
I wouldn’t write it off. These reasons may apply to a lot of people, even if they wouldn’t express them in those words. I found point 2 particularly interesting.
Thanks—most of those names ring a bell, but The Selfish Gene is the only one I’ve read. I guess some of the value of reading them is gone for me now that my mind is already changed? But I’ll keep them in mind :-)