This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health.
I absolutely agree that those issues are very neglected, but only among the general population. They’re not at all neglected within EA. Specifically, the question we should be asking isn’t “do people care enough about this?” but “how far will my marginal dollar go?”
To answer that latter question, it’s not enough to highlight the importance of the issue; you would have to argue that:
1. There are longtermist organizations that are currently funding-constrained,
2. Such that more funding would enable them to do more or better work,
3. And this funding gap can’t be filled by existing large EA philanthropists.
This is a good illustration of how longtermists have neglected tractability: Benjamin is thinking only in terms of importance and crowdedness, and leaving tractability out entirely.
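To make that concrete, here is a sketch of the standard factorization behind the importance/tractability/neglectedness framework, following the 80,000 Hours formulation (the labels are mine, not Benjamin’s):

```latex
% Marginal cost-effectiveness factored into the three ITN terms,
% following the 80,000 Hours / Owen Cotton-Barratt formulation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\underbrace{\frac{\text{good done}}{\text{extra dollar}}}_{\text{cost-effectiveness}}
  = \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
  \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
\]
\end{document}
```

The intermediate terms cancel, so the product really is good done per extra dollar. The point of writing it out is that the middle factor matters just as much as the other two: if tractability is near zero, a problem can be enormous in scale and uncrowded while still offering little value at the margin.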