The bottom part of your diagram has lots of boxes in it. Further up, “poverty alleviation is most important” is one box. If there were as much detail in the latter as there is in the former, you could draw an arrow from “poverty alleviation” to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn’t exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way among them, and “poverty alleviation is most important” would be a bottleneck.
Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?).
I agree that there’s some real sense in which existential risk or far-future concerns are more of a bottleneck than human poverty alleviation or animal welfare—there’s a bigger “cause-distance” between colonising Mars and working on AI than between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and “insight” overstates the difference.