I suspect that one could make a chart that shows a bottleneck in a lot of different places. From my understanding, GiveWell does not seem to hold the position the yEd chart would imply it holds:
“I reject the idea that placing high value on the far future – no matter how high the value – makes it clear that one should focus on reducing the risks of catastrophes” http://blog.givewell.org/2014/07/03/the-moral-value-of-the-far-future/
The yEd chart shows GiveWell as holding the opinion that poverty alleviation is desirable and quite likely the best allocation of resources, as of 2013. This does not seem to be a controversial claim. The chart makes no claims about GiveWell’s opinion in any other year.
Notice also that the arrows in that chart mean only that, empirically, individuals espousing one yellow opinion have frequently been observed to change their opinion to the one below it. The reverse can also happen, though it is less frequent, and people frequently spend years, if not decades, holding a particular opinion.
Can you give an example of a chart where a bottleneck would occur at a node other than the X-risk node or the transition-to-the-far-future node? I would be interested in seeing patterns that escaped my perception, and it is really easy to change the yEd graph if you download it.
The bottom part of your diagram has lots of boxes in it. Further up, “poverty alleviation is most important” is one box. If there were as much detail in the latter as there is in the former, you could draw arrows from “poverty alleviation” to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn’t exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way in amongst them, and “poverty alleviation is most important” would be a bottleneck.
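To make the fan-out point concrete, here is a minimal sketch in Python. The sub-cause names are my own illustrative expansion of the single “poverty alleviation” box, not nodes taken from the actual yEd file; the point is only that once a node is expanded to the same level of detail as the lower part of the diagram, it looks like a bottleneck by out-degree alone.

```python
# Toy sketch only: these sub-cause names are an invented expansion for
# illustration, not the contents of the real yEd chart.
chart = {
    "poverty alleviation is most important": [
        "economic empowerment",
        "reducing mortality rates",
        "reducing morbidity rates",
        "preventing unwanted births",
        "lifting trade restrictions",
        "open borders",
        "education",
    ],
    "reducing X-risk is most important": [
        "AI safety",
        "colonising Mars",
        "gaining strategic insight",
    ],
}

# Judged purely by fan-out, both nodes now look like "bottlenecks":
# every path into the detail below them passes through them.
for node, children in chart.items():
    print(f"{node}: {len(children)} outgoing arrows")
```

On this toy version, whichever box you choose to expand ends up with the most outgoing arrows, which is the sense in which the apparent bottleneck tracks the chart’s level of detail rather than the causes themselves.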
Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?).
I agree that there’s some real sense in which existential risk or far future concerns is more of a bottleneck than human poverty alleviation or animal welfare—there’s a bigger “cause-distance” between colonising Mars and working on AI than the “cause-distance” between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and “insight” overstates the difference.