Yeah, it does sound like he might be open to funding EA causes at some point in the future.
I do think, though, that it is still a good criticism. There is a risk that people who would otherwise pursue weird, idiosyncratic, yet impactful projects get discouraged because such projects are hard to justify within a simple EA framework. One potential downside of 80k's work, for example, is that some people might end up being less impactful because they choose the "safe" EA path rather than a more unusual, risky, and, from the EA community's perspective, low-status path.
Let's model it. Right now it seems like a very vague risk. If it is a significant risk, it seems worth investigating in a way that would let us find out whether we are wrong.
I’d also say things like:
EAs pursue a lot of projects, many of which are outlandish or not obviously impactful; how does this compare to the counterfactual?
But it's hard for me to see how, you know, writing a Treatise of Human Nature would score really highly in an EA-oriented framework. As assessed ex post, that looked like a really valuable thing for Hume to do.
Actually, there are a lot of EAs researching philosophy and human psychology.
I think Collison's conception of EA is something like "GiveWell charity recommendations", which seems to be a common misunderstanding among most non-EA people. I haven't watched the whole interview, but it seems odd that he doesn't address the tension between what he had just said about EA and his comments on x-risks and longtermism.