In my own work now, I feel much more personally comfortable leaning into cause area-specific field building, and into groups that focus on a particular project or problem. These are much more manageable commitments, and they can exemplify the EA lens of looking at a project without it becoming a personal identity.
The absolute strongest answer to most critiques or problems that have been mentioned recently is strong object-level work.
If EA has the best leaders, the best projects, and the most success in executing genuinely altruistic work, especially across a broad range of cause areas, that is a complete and total answer to:
“Too much” spending
billionaire funding/asking people to donate income
most “epistemic issues”, especially with success in multiple cause areas
If we have the world leaders in global health, animal welfare, pandemic prevention, and AI safety each saying, “Hey, EA has the strongest leaders, and its ideas and projects are reliably important and successful”, no one will complain about how many free books are handed out.
I broadly agree with this, but at least with AI safety there’s a Goodharting issue: we don’t want AIS researchers optimising for legibly impressive ideas/results/writeups.
I assume there’s a similar-in-principle issue for most cause areas, but it does seem markedly worse for AIS, given the lack of meaningful feedback on the most important questions.
There’s a significant downside even in having some proportion of EA AIS researchers focus on more legible results: it gives a warped impression of useful AIS research to outsiders. This happens by default, since there are many incentives to pick a legibly impressive line of research, and there’ll be more engagement with more readable content.
None of this is to say that I know, e.g., MIRI-style research to be the right approach. However, I do think we need to be careful not to optimise for the appearance of strong object-level work.
I agree, and I think this is an argument for investing in cause-specific groups rather than generalized community building.