For what it’s worth, I think you shouldn’t worry about the first two bullets. The way you as an individual, or EA as a community, will have a big impact is through specialization. Being an excellent communicator of EA ideas is going to have far bigger and potentially compounding returns than your personal dietary or donation choices (assuming you aren’t very wealthy). If stressing about the latter takes away from the former, that is the mistake worth worrying about.
That said, I shouldn’t comment without also answering the question myself:
- I balk at thorny or under-scoped research problems that could be very valuable.
  - It feels aversive to dig into something without a sense of where I’ll end up or whether I’ll even get anywhere.
  - If there’s a way I can bend what I already know or am good at into the shape of the problem, I’ll do that instead.
  - One way this shows up is that I only seek out information, arguments, and context that are legible to me, specifically more big-picture, social-science-oriented work like Holden’s, Joe Carlsmith’s, or Carl Shulman’s, even though whether the technical aspects of AI alignment/evals actually make sense is a bigger and unduly under-explored crux for figuring out what matters.
- I fail to be a team player in a lot of ways.
  - I have my own sense of what my team/org’s priorities should be.
  - I expect others around me to intuit and adopt those priorities with little or no communication.
  - When we don’t agree or reach consensus and there’s a route for me to avoid resolving the tension, I take the avoidant route, so things that I don’t think are important, but others do, end up not happening.