Let’s model it. Currently it seems like a very vague risk. If it’s a significant risk, it seems worth considering in a way that would let us find out if we were wrong.
I’d also say things like:
EAs pursue a lot of projects, many of which are outlandish or not obviously impactful; how does this compare to the counterfactual?
But it’s hard for me to see how, you know, writing a treatise of human nature would score really highly in an EA-oriented framework. Assessed ex post, that looked like a really valuable thing for Hume to do.
Actually, there are a lot of EAs researching philosophy and human psychology.
I think Collison’s conception of EA is something like “GiveWell charity recommendations”, which seems to be a misunderstanding shared by most non-EA people. I haven’t watched the whole interview, but it seems odd that he doesn’t address the tension between what he had just said about EA and his comments on x-risks and longtermism.