Kudos for bringing this up; I think it’s an important area!
Do we, as a community, sometimes lean more toward unconsciously advocating specific outcomes rather than encouraging people to discover their own conclusions through the EA framework?
There’s a lot to this question.
I think that many prestigious/important EAs have come to similar conclusions. If you’ve come to think that X is important, it can seem very reasonable to focus on promoting X and working with people to improve it.
You’ll see some discussions of “growing the tent”—this can often mean “partnering with groups that agree with the conclusions, not necessarily with the principles”.
One question here is something like, “How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?” Arguably, it would take more dedicated effort to really answer this. I think we just don’t have much work in this area now, compared to more object-level work.
Another factor seems to have been that FTX stained EA’s reputation and hurt CEA, after which there was a period with less attention on EA as a whole and more on specific causes like AI safety.
In terms of “What should the EA community do”, I’d flag that a lot of the decisions are really made by funders and high-level leaders. It’s not clear to me how much agency the “EA community” has independently of these groups.
All that said, I think it’s easy for us to generally be positive towards people who take the principles in ways that don’t match the specific current conclusions.
I’m personally on the side that thinks the current conclusions are probably overconfident and miss some very important considerations.
Thanks for the answer, and for splitting the issue into several parts; it really makes some things clearer in my mind! I’ll keep thinking about it (and take a look at your posts; you seem to have spent quite some time thinking about meta EA, and I realize there might be a lot of past discussion to catch up on before I start looking for a solution by myself!)
Can you give specifics? Any crucial considerations that EA is not considering or under-weighting?