sub-forums would cause the specializations to ossify and remove valuable cross-pollination of ideas
I really don’t think this is a problem: there is already so much cross-pollination between people in different, unrelated cause areas within EA, at EA Globals, other EA events, the friendship groups created by those events, Facebook groups, etc. And I’m not sure such cross-pollination is even useful; there’s too little overlap between topics like AI safety and animal welfare.
I somewhat agree. When I say “I’m worried about”, I don’t mean “I’m confident but using softening language”; I’m genuinely pretty uncertain. The meta point is that I’m worried about it and predict it would be hard to reverse once sub-forums were in place.
On the object level, I’m worried less about AI safety and animal welfare than about the boundaries between related cause areas. For example:
1) Hardening currently fuzzy boundaries between different specialties of long-termism
2) Reducing the flow of context from object-level work into the meta-EA space
3) Reducing specialty knowledge sharing between cause areas, e.g. outreach knowledge shared between farm animal welfare and global poverty
These seem like problems that one could largely address, but (back to the meta point) I’d expect addressing them well to require at least a month’s worth of work.