Looking at the two critiques in reverse order:
I think it’s true that it’s easy for EAs to lose sight of the big picture, but to me the reason for that is simple: humans, in general, are terrible at seeing the bigger picture. If anything, it seems to me that EA frameworks are better than most altruistic endeavors at seeing the big picture. Most altruistic endeavors don’t get past the stage of “see good thing and do it,” whereas EAs tend to ask whether X is really the most effective thing they can do, which invariably involves looking at a bigger picture than the immediate thing in front of them. In my own field of AI safety, thinking about the big picture is an idea people are routinely exposed to. Researchers often do exercises like backchaining (asking what the main goal is, e.g. “make AI go well,” and figuring out how to work backwards from that to what you should be doing now) and writing a theory of change (spelling out specifically what problem you want to help with, what you want to achieve, and how that will help).
Do you think there are specific vulnerabilities that make EAs lose sight of the bigger picture, which non-EA altruistic people avoid?
On the point about forgoing fulfillment: I’m not sure exactly what fulfillment you think people are forgoing here. Is it the fulfillment of having lots of money? The fulfillment of working directly on the world’s biggest problems?