Looking at the two critiques in reverse order:
I think it's true that it's easy for EAs to lose sight of the big picture, but to me the reason for that is simple: humans, in general, are terrible at seeing the bigger picture. If anything, it seems to me that EA frameworks are better than most altruistic endeavours at seeing the big picture. Most altruistic endeavours don't get past the stage of "see a good thing and do it", whereas EAs tend to ask whether X is really the most effective thing they can do, which invariably involves looking at a bigger picture than the immediate thing in front of them. In my own field of AI safety, thinking about the big picture is an idea people are routinely exposed to. Researchers often do exercises like backchaining (asking what the ultimate goal is, e.g. "make AI go well", and working backwards from that to what you should be doing now) and writing a theory of change (spelling out specifically what problem you want to help with, what you want to achieve, and how that will help).
Do you think there are specific vulnerabilities EAs have that make them lose sight of the bigger picture, which non-EA altruistic people avoid?
On the point of foregoing fulfillment: I'm not sure exactly what fulfillment you think people are foregoing here. Is it the fulfillment of having lots of money? The fulfillment of working directly on the world's biggest problems?