I somewhat agree. When I say “I’m worried about”, I don’t mean “I’m confident but using softening language” – I’m actually pretty uncertain. The meta point is that I’m worried about it and predict it would be hard to reverse.
On the object level, my worry isn’t so much about AI safety or animal welfare themselves as about the boundaries between related cause areas. For example:
1) Hardening currently fuzzy boundaries between different specialties of long-termism
2) Reducing the flow of context from object level work into the meta-EA space
3) Specialty knowledge sharing between cause areas, like outreach knowledge between farm animal welfare and global poverty
These seem like problems one could largely address, but (back to the meta point) I’d expect doing so well to require at least a month’s worth of work.