Some (fairly minor) points, given I generally agree:
1) A critic taking the second option doesn’t need to say “We will never discover any pond-like acts”, but something like “The likelihood of such a discovery is sufficiently low (either because in fact such acts are rare or because whether or not they are there we cannot expect good access to them)”. The bare possibility we might discover a pond-like act in the future doesn’t make EA worth one’s attention.
2) I am hesitant to make the move that all criticism of EA ‘as practiced’ is inapposite. For caricature, if most EAs decided they’d form a spree-killing ring, a reply along the lines that this mere concretum bears no relevance to the platonic ideal of EA (alas poorly instantiated in this case) doesn’t seem to cut it. If EA is generally going wrong so badly it’s worse than the counterfactual, this seems entirely fair to criticise (I agree this looks unlikely by the lights of any sensible moral view).
It also seems fair to criticise EA if a substantial minority are doing something you deem stupid (e.g. “Look at those muppets who think giving to MIRI is 10^ridiculous times more important than stopping kids starving”). If I think some significant subset of people who believe X are doing (because of said belief) something silly or objectionable, it seems fair to have it as a black mark against X, even if it doesn’t mean I think it makes them bad all things considered: “Yeah, EA is good when it gets people giving more to charity—it’s a shame it seems to lead people up the garden path to believing ridiculous stuff like killer robots and what-not”. (N.B. I picked AI risk as it hits the ‘unsweet spot’ of being fairly popular in EA yet pretty outlandish outside it—these are not criticisms I endorse myself.)
1) I agree—I was speaking loosely.
2) I may have misunderstood, but I think these would fall under the third way of criticising EA I mentioned:
But such a critique also falls under the second kind of critique that you said would be a “misfire”. Perhaps you meant that it’s a misfire only if the critic is trying to argue against ideal EA, but in my experience most critics are not trying to do that, they’re arguing against the EA movement.
I’d like to steelman a slightly more nuanced criticism of Effective Altruism. It’s one that, as Effective Altruists, we might tend to dismiss (as do I), but non-EAs see it as a valid criticism, and that matters.
Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call ‘working within the system’, a lot of people assume that is what EA explicitly promotes. If I thought there was a movement which said something like ‘you can solve all the world’s problems by donating enough’, I might have reservations too. They worry that EA does not give enough credence to the value of building community and social ties.
Of course, articles like this (https://80000hours.org/2015/07/effective-altruists-love-systemic-change/) have been written, but it seems this is still being overlooked. I’m not arguing we should necessarily spend more time trying to convince people that EAs love systemic change, but it’s important to recognise that many people have what sound to them like totally rational criticisms.
Take this criticism (https://probonoaustralia.com.au/news/2015/07/why-peter-singer-is-wrong-about-effective-altruism/ - which I responded to here: https://probonoaustralia.com.au/news/2016/09/effective-altruism-changing-think-charity/). Even after I addressed the author’s concerns about EA focusing entirely on donating, he still contacted me with worries that EA is going to miss the unintended consequences of reducing community ties. I disagree with the claim, but it makes sense given his understanding of EA.
I read through your article, but let me see if I can strengthen the claim that charities promoted by effective altruism do not actually make systemic change. Remember, effective altruists should care about the outcomes of their work, not the intentions. It does not matter whether effective altruists love systemic change; if that change fails to occur, their actions are not in the spirit of effective altruism. Simply put, charities such as the Against Malaria Foundation harm economic growth, limit freedom, and instill dependency, all while attempting to stop a disease which kills about as many people every year as the flu. Here’s the full video
The link to your argument regarding international aid is broken, so I’ll post this here. While I am all for effective altruism in principle, the claim that the particular aid organizations that GiveWell and others promote do the most good is patently false. I live and work in West Africa, and I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities. Effective Altruism as a movement has failed to actually be effective because it promotes charities that do more harm than good. Here’s a video as to why: Stop Giving Well
If it’s so prevalent, make a series of videos about that instead. It would do far more to undermine GiveWell and strengthen your credibility.
Your video against GiveWell does not address or debunk any of GiveWell’s evidence. It’s a philosophical treatise on GiveWell’s methods, not an evidence-based one. Arguing by analogy from your own experience is not evidence. I’ve been robbed three times living in Vancouver and zero times in Africa, despite living in Namibia/South Africa for most of my life. This does not, however, entail that Vancouver is more dangerous; in fact, I have near-zero evidence to back up the claim that Vancouver is more dangerous.
All of your methodology objections (and far stronger anti-EA arguments) were systematically raised in Iason Gabriel’s piece on criticisms of effective altruism, and all of those criticisms were systematically responded to and found lacking in Halstead et al.’s defense paper.
I’d highly recommend reading both of these. They are both pretty badass.