I made a map with the opinions of many Effective Altruists and how they changed over the years.
My sample was biased toward people I live with and whose writing I read. I tried to account for many different starting points, and of course I got many people’s opinions wrong, since I was only estimating them.
Nevertheless, there seems to be a bottleneck at accepting Bostrom’s Existential Risk as The Most Important Task for Humanity. If the trend is correct, and if it continues, it would generate many interesting predictions about where new EAs will come from.
Here, have a look:
http://i.imgur.com/jQhoQOZ.png
For the file itself (open it in the program yEd by clicking File → Open URL and pasting in the link below):
https://dl.dropboxusercontent.com/u/72402501/EA%20flowchart%20Web.graphml
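If you would rather inspect the file outside of yEd, here is a minimal sketch of how it could be loaded with Python and networkx (this assumes the .graphml has been downloaded locally under the name in the URL; yEd stores node labels in its own GraphML extension elements, so they may need extra handling):

```python
# Minimal sketch: load the downloaded flowchart and list its nodes.
# Assumes the file above has been saved locally as "EA flowchart Web.graphml".
import networkx as nx

G = nx.read_graphml("EA flowchart Web.graphml")

print(G.number_of_nodes(), "opinions (nodes)")
print(G.number_of_edges(), "observed transitions (edges)")

# Nodes touched by many transitions are candidate bottlenecks.
for node in G.nodes():
    print(node, "degree:", G.degree(node))
```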
I suspect that one could make a chart that shows a bottleneck in a lot of different places. From my understanding, GiveWell does not seem to hold the view that the yEd chart would imply:
“I reject the idea that placing high value on the far future – no matter how high the value – makes it clear that one should focus on reducing the risks of catastrophes” http://blog.givewell.org/2014/07/03/the-moral-value-of-the-far-future/
The yEd chart shows GiveWell as being of the opinion that poverty alleviation is desirable and quite likely the best allocation of resources in 2013. That does not seem to be a controversial claim. There are no claims about GiveWell’s opinion in any other year.
Notice also that the arrows in that chart mean only that, empirically, individuals espousing one yellow opinion have frequently been observed to change their opinion to the one below it. The reverse can also happen, though it is less frequent, and frequently people spend years, if not decades, holding a particular opinion.
Can you give an example of a chart where a bottleneck would occur in a node that is not either the X-risk node or the transition-to-the-far-future node? I would be interested in seeing patterns that escaped my perception, and it is really easy to change the yEd graph if you download it.
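To make “bottleneck” a bit more concrete: one rough way to check for it mechanically, sketched below, is to treat the chart as a directed graph of opinion transitions and rank nodes by betweenness centrality, i.e. by how many trajectories from a starting opinion to an end opinion have to pass through them. The node names here are illustrative stand-ins, not the labels actually used in the yEd file.

```python
# Rough sketch: rank opinions by betweenness centrality as a proxy for
# "bottleneck-ness". The edges below are illustrative, not the real chart.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("poverty alleviation", "the far future matters"),
    ("animal welfare", "the far future matters"),
    ("the far future matters", "x-risk is most important"),
    ("x-risk is most important", "AI safety"),
    ("x-risk is most important", "other x-risk work"),
])

# Higher score = more opinion trajectories funnel through that node.
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {node}")
```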
The bottom part of your diagram has lots of boxes in it. Further up, “poverty alleviation is most important” is one box. If there were as much detail in the latter as there is in the former, you could draw arrows from “poverty alleviation” to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for the lifting of trade restrictions, open borders (which certainly doesn’t exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way amongst them, and “poverty alleviation is most important” would be a bottleneck.
Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?).
I agree that there’s some real sense in which existential risk or far future concerns is more of a bottleneck than human poverty alleviation or animal welfare—there’s a bigger “cause-distance” between colonising Mars and working on AI than the “cause-distance” between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and “insight” overstates the difference.
Is it possible to get a picture of the graph, or does that not make sense?
Here you go: image
Thank you, Ryan. I tried doing this but wasn’t tech-savvy enough.
No problem. There’s an Export option in yEd’s File menu. Then you have an image file that you can upload to Imgur.
Thanks!
Wow, this is amazing! It brings to mind the idea of a “what kind of altruist are you?” quiz, with the answer providing a link to the most relevant essay or two which might change your mind about something...