I am a software engineer working at the Human Diagnosis Project. I am interested in ML/AI in general, and NLP in particular.
In my view, the most neglected problem in the EA community is what I call the second alignment problem: how to align the incentives and toolsets of large sources of influence and capital with human values.