But I sometimes have a fear in the back of my mind that some of the attendees who are intrigued by these ideas are later going to look up effective altruism, get the impression that the movement’s focus is just about existential risks these days, and feel duped. Since EA pitches don’t usually start with longtermist ideas, it can feel like a bait and switch.
Do you have any evidence that this is happening? Feeling duped just seems like a bit of a stretch here. Animal welfare and global health still make up a large part of the effectivealtruism.org home page. GiveWell still ranks its top charities, and the funds it raised more than doubled from 2020 to 2021.
My impression is that the movement does a pretty good job explaining that caring about the future doesn’t mean ignoring the present.
Assuming someone did feel duped, what is the range of possible outcomes?
Perhaps they would get over it, read more about longtermism, and the feeling would subside.
Perhaps they wouldn’t and they’d just stick with whatever effective cause they would have otherwise donated to.
Perhaps some combination of the two over time (this was pretty much my trajectory).
Perhaps they’d think, “These people are crazy, I’m gonna keep giving to PlayPumps.”
Kidding aside, that last possibility seems the least likely to me, and anyone in that bucket seems like a pretty bad candidate for EA in general.
Anecdotally, yes. My partner, who proofread my piece, left this comment on the passage above: “Hit the nail on the head. This is literally how I experienced coming in via effective giving/global poverty calls to action. It wasn’t long till I got bait-and-switched and told that this improvement I just made is actually pointless in the grand scheme of things. You might not get me on board with extinction prevention initiatives, but I’m happy about my charity contributions.”
The comment I linked to explains well why many can come away with the impression that EA is just about longtermism these days.
My impression is that about half of EA is people complaining that “EA is just longtermism these days” :)
LOL. I wonder how much of this is the red-teaming contest. While I see the value in it, the forum will be a lot more readable once that and the cause exploration contest are over.
I guess we can swap anecdotes. I came to EA for the GiveWell top charities, a bit after that Vox article was written. It took me several years to come around on the longtermism/x-risk stuff, but I never felt duped or bait-and-switched. Cause neutrality is a super important part of EA to me, and I think that naturally leads to exploring the weirder/more unconventional ideas.
Using terms like “duped” and “bait and switch” also implies that something has been taken away, which is clearly not the case. There is a lot of longtermist/x-risk content these days, but there is still plenty going on with donations and global poverty. More money than ever is being moved to GiveWell top charities (I don’t have the time to look it up, but I would be surprised if the same weren’t also true of EA animal welfare), and (from memory) the last EA survey showed that a majority of EAs consider global health and wellbeing their top cause area.
I hadn’t heard the “rounding error” comment before (and don’t agree with it), but before I read the article, I was expecting that the author would have made that claim, and was a bit surprised he was just reporting having heard it from “multiple attendees” at EAG—no more context than that. The article gets more mileage out of that anonymous quote than really seems warranted—the whole thing left me with a bit of a clickbait-y/icky feeling. FWIW, the author also now says about it, “I was wrong, and I was wrong for a silly reason...”
In any case, I am glad your partner is happy with their charity contributions. If that’s what they get out of EA, I wouldn’t at all consider that being filtered out. Their donations are doing a lot of good! I think many come to EA and stop with that, and that’s fine. Some, like me, may eventually come around on ideas they didn’t initially find convincing. To me that seems like exactly how it should work.
There’s a sampling bias problem here. The EAs who are in the movement, and the people EAs are likely to encounter, are the people who weren’t filtered out of the movement. One could sample EAs, find a whole bunch of people who aren’t into longtermism but weren’t filtered out, and declare that the filter effect isn’t a problem. But that wouldn’t take into account all the people who were filtered out, because counting them is much harder.
In the absence of a way to do that, here is how I explained my reasoning:
I have met various effective altruists who care about fighting global poverty, and maybe about improving animal welfare, but who are not sold on longtermism (and are sometimes hostile to portions of it, usually to the concerns about AI). In their cases, their appreciation for what they consider the good parts of EA outweighs their skepticism of longtermism, and they become part of the movement. It would be very surprising if there weren’t others in a similar boat who are somewhat more averse to longtermism and somewhat less appreciative of the rest of EA, so that the balance swings the other way and they avoid the movement altogether.
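To make that selection effect concrete, here is a toy simulation (a minimal sketch with entirely made-up parameters; the “appreciation” and “aversion” scores are hypothetical, not estimates of anything real):

```python
import random
from statistics import mean

# Toy model of the filter effect. Each person has some appreciation for
# near-term EA work and some aversion to longtermism; they join the movement
# only if the appreciation outweighs the aversion.
random.seed(0)

joined, filtered_out = [], []
for _ in range(100_000):
    appreciation = random.gauss(1.0, 1.0)  # liking for effective giving etc.
    aversion = random.gauss(1.0, 1.0)      # distaste for longtermism
    if appreciation > aversion:
        joined.append(aversion)            # visible to surveys of EAs
    else:
        filtered_out.append(aversion)      # invisible: they never showed up

print(f"joined:       n={len(joined)}, mean aversion={mean(joined):.2f}")
print(f"filtered out: n={len(filtered_out)}, mean aversion={mean(filtered_out):.2f}")
```

Surveying only the people who joined would badly understate how much aversion exists among the people who bounced off, which is exactly why counting them is hard.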
Sorry for the delay. Yes, this seems like the crux:
“It would be very surprising if there weren’t others in a similar boat who are somewhat more averse to longtermism and somewhat less appreciative of the rest of EA, so that the balance swings the other way and they avoid the movement altogether.”
As you pointed out, there’s not much evidence either way. Your intuitions tell you that there must be a lot of these people, but mine say the opposite. If someone likes the GiveWell recommendations, for example, but is averse to longtermism and less appreciative of the other aspects of EA, I don’t see why they wouldn’t just use GiveWell for their charity recommendations and ignore the rest, rather than avoiding the movement altogether. If these people are indeed “less appreciative of the rest of EA,” they don’t seem likely to contribute much to a hypothetical EA sans longtermism either.
Further, it seems to me that renaming/dividing up the community is a huge endeavor, with lots of costs. Not the kind of thing one should undertake without pretty good evidence that it is going to be worth it.
One last point: for those of us who have bought into the longtermist/x-risk stuff, there is the added benefit that many people who come to EA for effective giving, etc. (including many of the movement’s founders) eventually do come around on those ideas. If you aren’t convinced, you probably see that as somewhere on the scale of negative to neutral.
All that said, I don’t see why your chapter at Microsoft has to have Effective Altruism in the name. It could just as easily be called Effective Giving if that’s what you’d like it to focus on. It could emphasize that many of the arguments/evidence for it come from EA, but EA is something broader.
I agree it’d be good to do rigorous analyses/estimations of the costs and benefits to global poverty and animal welfare causes of being under the same movement as longtermism. If anyone wants to do this, I’d be happy to help brainstorm ideas on how it could be done.
I responded to the point about longtermism benefiting from its association with effective giving in another comment.