There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
Possible candidates:
We’re severely underrating tractability and importance (specifically in terms of sentience) for wild animals
We’re severely underrating neglectedness (and maybe some other criteria?) for improving data collection in LMICs
We’re severely underrating tractability and neglectedness for some category of political interventions
Something’s very off in our model of AI ethics (in the general sense, including AI welfare)
We’re severely underrating tractability of nuclear security-adjacent topics
There’s something wrong with the usual EA causes that makes them ineffective, so we get left with more normal causes
We have factually wrong beliefs about the outcome of some sort of process of major political change (communism? anarchism? world government?)
None of these strikes me as especially likely on its own, but combined they still add up to a reasonable chance.