I agree that some of the points I listed could have been better framed.
In terms of why CEA: I see it as a core function of CEA to ensure that members of the EA community have access to the information they need to make up their own minds about how to have as great an impact as possible. I don’t think CEA should always follow; it’s okay for it to lead as well. But if it ran a course and most people who took it didn’t find that it helped them develop their views, I would see that as a failure.
Regarding belonging, I don’t see that as the primary thing CEA should optimise for, particularly when it comes at the expense of epistemics. It’s worth thinking through how to frame things so as to foster as much belonging as possible, which is part of why I suggested a course covering considerations relevant to people from various cause areas, but belonging isn’t the number one priority.
I agree that the plan of yearly topics (I wasn’t suggesting a rotation, but rather that the topic would be different every year unless exceptional circumstances caused us to repeat one) would require significant resources. On the other hand, I believe it would be well worth it to significantly reduce the chance of intellectual stagnation within the community.
It’s certainly possible that a version of this content could address the belongingness concerns I identified.
About belongingness more generally: when the question of splitting up EA (e.g., into neartermist and longtermist branches) has arisen, people have generally been opposed. But I think a consequence of that position is that certain central organizations need to reflect a rough balance of the different cause areas and neartermist/longtermist perspectives within the movement. Stated differently, I don’t think both of the following can be true at once: “CEA is a broad-based organization for promoting effective altruism” and “CEA clearly gives the impression that certain key methodologies, cause areas, or philosophical views that are prominent within the community are second-rate.” There are arguments for giving up the first statement to free CEA from the constraints it imposes, but doing so would carry real costs. In my view, any argument that “CEA should do X,” where X risks causing disunity, needs to acknowledge those downsides and explain why the marginal benefit of housing the work at CEA outweighs them.
As for epistemics, I tend to prefer decentralized epistemic institutions to the extent practicable. Maybe that’s a bias from my professional training (as a lawyer), but in general I’d rather have a robust epistemic marketplace in which almost everyone can promote their ideas without having to compromise on belongingness grounds than set up CEA (or any similar organization) as the promoter of views that do not reflect broad community consensus. EAs, EA-adjacent people, and EA-interested people can evaluate epistemic claims for themselves, and centralizing epistemics creates the usual risks of any system with a single point of failure.