Head of Communications at the Centre for Effective Altruism. Previously: News Editor at The Economist; journalist and growth manager at Protocol; journalist at Finimize.
Shakeel Hashim
EA Wins 2023
Here’s where CEA staff are donating in 2023
This is really cool, thanks for organising it!
GiveWell has previously recommended MSF as a good disaster relief org, so that would be my best guess. I’d love to know more, though.
“is there no EA press or comms unit that journalists contact before publishing such articles” — sometimes CEA or Forethought get asked for comment on pieces, but the vast majority of the time no one contacts us. It’s quite frustrating.
Yeah, the phrase “woke mob” (and similar) is extremely common in conservative media!
I don’t have an answer, but would suggest you talk to the folks at the Good Food Institute if you haven’t already — they might have advice, or at the very least be able to point you towards other people you could ask about this.
This is great, thanks for highlighting it. Evidence Action is another excellent charity that’s nominated; here’s the link to vote for them: https://charitynavigator.typeform.com/to/PsmPZTwp#organization=Evidence%20Action%20Inc.
Thanks for this. I agree that we’ve been neglecting social media. The main reason, as far as I can tell, is that no one at CEA was primarily focused on comms/marketing until I was hired in September — and then other events proved to be attention-stealing.
Social media is going to be a major part of the communications strategy I outlined here; I expect you’ll see us being more active in the coming months. https://forum.effectivealtruism.org/posts/mFGZtPKTjqrfeHHsH/how-cea-s-communications-team-is-thinking-about-ea
This is interesting and I broadly agree with you (though I think Habryka’s comment is important and right). On point 2, I’d want us to think very hard before adopting these as principles. It’s not obvious to me that non-violence is always the correct option — e.g. in World War 2, I think violence against the Nazis was a moral course of action.
As EA becomes increasingly involved in campaigning for states to act one way or another, a blanket non-violence policy could meaningfully and harmfully constrain us. (You could amend the clause to be “no non-state-sanctioned violence” but even then you’re in difficult territory — were the French resistance wrong to take up arms?)
I think there are similar issues with the honesty clause, too — it just isn’t the case that being honest is always the moral course of action (e.g. the classic case of lying to the Nazis about Jews hidden in your basement).
These are of course edge cases, and I do believe that in ~99% of cases one should be honest and non-violent. But formalising that into a core value of EA is hard, and I’m not sure it’d actually do much because basically everyone agrees that e.g. honesty is important; when they’re dishonest they just think (often incorrectly!) that they’re operating in one of those edge cases.
Thanks for this post — it’s a really important issue. On tractability, do you think we’ll be best off with technical fixes (e.g. maybe we should just try not to make sentient AIs?), or will it have to be policy? (Or maybe it’s way too early to even begin to guess.)
Makes total sense — thank you, and looking forward to the handbook!
This is really exciting, nice work on putting it together. Do you have any plans to put the teaching materials (even if that’s just a reading list) online at any point? I think I’m not the right sort of person to do the course but I’d love to slowly work my way through a reading list in my own time.
I think this is interesting, but I don’t think it’s as clear-cut as you’re making out. There seem to me to be some instances where making the “first strike” is good — e.g. I think it’d be reasonable (though maybe not advisable) to criticise a billionaire for not donating any of their wealth; to criticise an AI company that’s recklessly advancing capabilities; to criticise a virology lab that has unacceptably lax safety standards; or to criticise a Western government that is spending no money on foreign aid. Maybe your “personal attack” clause means this kind of stuff wouldn’t get covered, though?
Great question, to which I don’t have a simple answer. I think I agree with a lot of what Sjir said here. I think claims 2 and 4 are particularly important — I’d like the effective giving community to grow as its own thing, without all the baggage of EA, and I’m excited to see GWWC working to make that happen. That doesn’t mean that in our promotion of EA we won’t discuss giving at all, though, because giving is definitely a part of EA. I’m not entirely sure yet how we’ll talk about it, but one thing I imagine is that giving will be included as a call-to-action in much of our content.
Really great post, thanks for writing this! EA’s animal successes are indeed really impressive. I want to push back a bit on the claim that “no one cares about” this, though. The “good things” forum post and Twitter thread I did back in December both did well; much of EAG programming is about wins; Animal Liberation Now, which has got a ton of attention, contains a whole chapter on progress in animal welfare; and indeed your own post got a ton of upvotes.
I do agree that we could always do more to celebrate and reflect on wins like this — I’m just pushing back because I think saying “no one cares about” can actually perpetuate the negative environment it’s trying to fight.
Definitely agreed that we need to showcase the action — hence my mention of “real-world impact and innovation” (and my examples of LEEP and far-UVC work as the kinds of things we’re very excited to promote).
Sorry that you’re struggling to find something here! I don’t have any great ideas, but some stuff that might be promising avenues to explore:
- Tobacco control and taxation in LMICs (https://forum.effectivealtruism.org/posts/RRm8vnmwjWK24ung2/taxing-tobacco-the-intervention-that-got-away-happy-world-no and https://www.openphilanthropy.org/research/tobacco-control/); relatedly, alcohol policy (https://www.givewell.org/research/grants/RESET-alcohol-December-2021)
- Telecoms and mobile money (https://www.openphilanthropy.org/research/telecommunications-in-lmics/ and https://forum.effectivealtruism.org/posts/vjysioCANWNXFKipq/the-impact-of-mobile-phones-and-mobile-money-for-people-in)
- Cash transfers (https://forum.effectivealtruism.org/posts/acBFLTsRw3fqa8WWr/large-study-examining-the-effects-of-cash-transfer-programs)
You might also want to look at Charity Entrepreneurship’s research: https://www.charityentrepreneurship.com/research (road safety could be interesting?). Best of luck!!
As someone who works on comms stuff, I struggle with this a lot too! One thing I’ve found helpful is just asking decision makers, or people close to decision makers, why they did something. It’s imperfect, but often helpful — e.g. when I’ve asked DC people what catalysed the increased political interest in AI safety, they overwhelmingly cited the CAIS letter, which seems like a fairly good sign that it worked. (Similarly, I’ve heard from people that Ian Hogarth’s FT article may have achieved a similar effect in the UK.)
There are also proxies that can be somewhat useful — if an article is incredibly widely read, and is the main topic in certain corners of Twitter for the day, and the policy suggestions from that article then end up happening, it’s probably at least in part because of the article. If readership/discussion was low, you’re probably not the cause.