Nice work on these day-in-the-life posts
The general point you’re making seems valuable.
One piece of pushback: it seems like you’re implying that success or experience ‘doing things’ = selling products / making money. I’d push back that not all altruistic work is aided by this type of experience. Altruism is not a consumer product.
If your post is just trying to make the point that people who have experience around X (private sector or otherwise) are better equipped to give advice about X, well then yeah, I don’t think anybody will argue with you about that.
A lot of what makes copy impactful (not in the EA sense of the word) has to do with language-specific sounds and connotations, so I don’t think I can be of much help.
That being said, I wonder if you could come up with something that plays off the immediacy of the pandemic as a tail risk that is actually quite palpable, and how AI poses a similar risk profile.
Sorry, potentially dumb question, but are the slogans written in English?
Yeah, if you read the essay, it spends a lot of time speaking to both of those questions.
tl;dr
The fear is born from the very DNA of EA, which has its roots in avoiding the emotional irrationalities that lead to ineffective forms of altruism. The culture shift I want to see is a product of (a) acknowledging and relinquishing this fear when it’s not grounded in reality, and (b) understanding the value proposition of good communications.
One of the broader points I’m advocating for is that the middle ground is far more stable and sizable than many in the community might think it is.
I think the ‘non-serious’ individual you speak of is something of a straw man. If they are real, the risk of them polluting the quality of EA’s work is quite small IMO. It’s important to make a distinction between the archetype of a follower/fan (external comms) and a worker/creator (internal comms). A lot of EAs conflate internal and external communications.
Thanks James, cool to hear.
thanks Ulrik 🤝
If people want more concrete ideas, they can hire me to do communications work.
I don’t know how to be more concrete than I did in the article without working for free.
Fixed, thanks
Fixed, thanks
How EA can be better at communications
Thanks for the reply, Nathan.
I think EA shouldn’t want inefficient charities to end, simply because it has no ability to actually make that happen. There will always be people who donate with pathos before logos, and that is something EA could get better at harnessing to its advantage.
Yeah, what I’m advocating for in this post is that you might be able to do more good per dollar by shifting the graph to the right (if that is in fact possible), because the graph is not actually a nice, even bell curve but heavily bunched in the middle, if not to the left.
Thanks for your comments, kbog!
The idea behind the post was not to advocate for spending more money on ineffective causes, at least not in the form of donations.
(Let’s go with global dev as an example problem area.) I think providing guidance begins to paint the picture of what I’m advocating for. But something like Vox newsletters isn’t an adequate way to study the effectiveness of global dev. The real issue at hand is what the upside of formal organization around analyzing dev effectiveness could be, i.e. a Center for Election Science for development, or Open Phil announcing a dev Focus Area.
First and foremost, I think there is a high upside to simply studying what the current impact of the dev sector is. This was the idea behind bringing up the orders of magnitude of difference between EA and dev earmarked capital. It’s not about deciding where a new donation goes. Nor is it accurate to frame it as deciding between managing ‘$1mil in domestic versus global health’. The reality is that there are trillions of dollars locked within dev programs that often have tenuous connections to impact. Making these programs just 1% more efficient could have massive impact potential relative to the small amount of preexisting capital EA has at play.
The broader point behind addressing these larger chunks of capital, and working directly on improving the efficiency of mainstream problem areas, is that the Overton window model of altruism suggests that people will always donate to ‘inefficient’ charities. Instead of turning away from this and forming its own bubble, EA might stand to gain a lot by addressing mainstream behaviors more directly. Shifting the curve to the right might be easier than building up from scratch.
The banner is really nice work!!