Strongly upvoted, and me too. Which sources do you have in mind? We can compare lists if you like. I’d be willing to have that conversation in private but for the record I expect it’d be better to have it in public, even if you’d only be vague about it.
It wasn’t a private group, but people do need to request to join if they’re on Facebook. I agree with you though.
That’s a good idea but the post was in a private group, so I figured that might complicate things if people aren’t on Facebook or they have to join a whole other group anyway before they join the conversation. I’ll do it next time though. Thanks for the suggestion.
Yeah, I’ve been thinking of updating the library but that would take enough effort I haven’t gotten around to it yet. I could get started on it whenever if I had some help. Please let me know if you or someone else you know would like to help. I might also make an EA Forum post requesting help, if you think that’d be a better idea.
Strongly upvoted. Which organizations are those?
Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it’s easy to make the mistake of saying “We’ll never get to point X” and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we’ll end up?
Asking that question as a stopping point doesn’t resolve the ambiguity of which parts of this are theoretical vs. practical.
If the increasing prominence of long-termism, in terms of the different kinds of resources it consumes relative to short-termist efforts, is only theoretical, then the issue is one worth keeping in mind for the future. If it’s a practical concern, then, other things being equal, it could be enough of a priority that determining which specific organizations should distinguish themselves as long-termist may need to begin right now.
The decisions different parties in EA make on this subject will be the main factor determining ‘where we end up’ anyway.
I can generate a rough assessment of what resources other than money near-termism vs. long-termism is receiving, and can anticipate receiving, for at least the near future. I could draft an EA Forum post for that by myself, but I could also co-author it with you and one or more others if you’d like.
Strongly upvoted. As I was hitting the upvote button, the existing karma changed from ‘4’ to ‘3’, meaning someone downvoted it. I don’t know why, and I consider it responsible of downvoters to leave a comment explaining why they’re downvoting, but it doesn’t matter because I gave this comment more karma than can be taken away so easily.
Is there an assessment of how big this problem really is? How many people distributed across how many local EA groups are talking about this? Is there a proxy/measure for what impact these disputes are having?
He asked about a status quo bias favouring the world the way it is. He noticed much of the EA community appears to favour the status quo for politics, economics, etc. He presumably meant mainstream liberal/centrist positions in Western countries. To paraphrase, he is new to EA and his intuition is that if EA does more good on the margin by doing what traditional institutions aren’t, then advocating for what those institutions are already doing might defeat the purpose of maximizing marginal utility. He thought he might be missing something, which is why he asked.
Yeah, I’ve added it as an embedded link in the post now. Thanks for catching that for me. I don’t know why I forgot that.
I haven’t really thought this through in any detail, but I wonder if the EA/rationalist obsession with deeply analyzing and debating everything makes them bad at memes.
This seems really plausible to me. I think I’m above average among EAs at memes.
It’s more than really plausible. It’s definitely true. In general, effective altruists tend to suck at making memes. More than a humble opinion, it is a fact that you’re in the top half of the top decile for making memes in EA. I wouldn’t be surprised if you’re in the top percentile. It’s not hard. Most effective altruists are just not that good at making memes.
So after releasing a tentative summary of research done by a coworker and me, I thought it’d be really cool to summarize our (very long) post in a few quick memes. But every time I try to do this (and seriously, I’ve spent ~30 minutes by now across multiple false starts), I get stuck because I worry too much about the memes not conveying the appropriate level of nuance or whatever, plus I worry about seeming too irreverent and accidentally making light of some people’s life’s work, plus… :/
You should have come to me. You could have messaged any major meme-maker in Dank EA Memes. This wouldn’t be hard. Even with concerns about being too irreverent, the solution is to run the memes by whoever did the work first. We’ve done that in Dank EA Memes before with Brian Tomasik and David Denkenberger.
I’ve been reading the comments in this thread but this one convinces me it’d be worthwhile to do a review of the impact of dank effective altruism memes.
EA as a community is pretty publicly sober-minded. I imagine that turns some people off.
It might be better for EA to turn some people off, if they’re not the kind who care to have sufficient standards for effectiveness. Also, as an admin for Dank EA Memes, I attest that EA could easily have a more vibrant meme culture if things changed to make that the better option. It’s not clear to me which way is the better way to go. I’m not going to dignify the notion that r/neoliberal memes are danker than EA memes with a response unless neoliberals start insisting on it, in which case I will likely respond with hostility.
I love it, but I always figured it was private for a reason—EA is full of lots of counterintuitive philosophical ideas that people find off-putting (like… utilitarianism alone is already off-putting to most normies), and EA seems to be very obsessed with having a good/prestigious reputation as a responsible, serious movement. Our jokes are mostly about how weird EA is, so we might want to keep our jokes to ourselves if we are desperately trying to seem normal to everyone else.
As an admin for that group, I can confirm that’s why the group has been private.
Our jokes are mostly about how weird EA is, so we might want to keep our jokes to ourselves if we are desperately trying to seem normal to everyone else.
We aren’t desperately trying to seem normal to everyone else. We shouldn’t try to be weird, and we should probably try to fit in with mainstream society in some crucial ways, but if our attempts to appear normal can be described as “desperate,” they’re probably an over-correction.
We could start making fun of ordinary charities like the Red Cross and Salvation Army, but I doubt that would go over well.
One of the Salvation Army’s slogans is also “doing the most good,” and yes, that is really true, so that’s made for some great memes. Otherwise, yes, memes like this have mostly been taken to be in poor taste.
I’m not sure about that theory; hopefully there is some way we can figure out how to harness meme magic.
This has already been accomplished in multiple ways. Since it was launched almost seven years ago, among other achievements, a few hundred thousand dollars have been counterfactually donated to EA-prioritized causes through that group. I’ve thought of doing a write-up about it but I’ve not gotten around to it. I’d do that write-up if enough people thought it’d be valuable.
That has always been a strawman of what earning to give is about. My opinion is that at this point it’s better for EA to assert itself in the face of misrepresentations instead of trying to defuse them through conciliatory dialogue that has never worked.
It doesn’t seem necessary to do it. In this comment, I went over how major mistakes in how EA branded itself in its first few years were, in hindsight, very bad optics, because they resulted in major public misconceptions about what EA is about and what it’s really effective for those in EA to do, e.g., with their careers.
Summary: It was bad optics for EA to associate itself with memes that misrepresent what the movement is really about. Mistaken branding efforts in EA’s first few years are what got the movement stuck with these inaccurate interpretations.
It’s both. Common misconceptions about EA are not only inaccurate representations of what EA is about. They’re the consequence of EA misrepresenting itself. That’s why it was bad optics.
The impression I’ve gotten from other comments on this post is that people aren’t very aware that these misconceptions about EA were caused by EA branding itself with memes like the one Hauke Hillebrandt references in this comment.
I don’t know if it’s because most people have joined the EA movement before it got stuck with these misconceptions.
Yet I’ve participated in EA for a decade and I remember for the first few years we associated ourselves with earning to give and overly simplistic utilitarian (pseudo-utilitarian?) approaches.
I made that mistake a lot. It’s hard to overstate how much we, the first ‘cohort’ of EA, made that mistake. (Linch, I’m aware you’ve been in EA for a long time too, but I don’t mean to imply you’re part of that first cohort or whatever.) It took only a few years for us to fully recognize we were making these mistakes and attempt to rectify the many resulting misconceptions about EA. Yet a decade later we’re still stuck with them.
I’m aware a problem with the terms ‘AI risk’ and ‘AI safety’ is that they don’t distinguish the AI alignment problem, the EA community’s primary concern about advanced AI, from other AI-related ethics or security concerns. I got interesting answers to a question I recently asked on LessWrong about who else has this same attitude towards this kind of conceptual language.