I think it is very unclear whether diverting money to these organisations would entrench wealth disparity. Examining the demographics of the organisations funded is a faulty way to assess the overall effect on global wealth inequality: the main effect these organisations will have is via the actions they take, not the take-home pay of their staff.
Consider pandemic risk. Open Phil has been the main funder in this space for several years, and if they had had their way, the world would have been much better prepared for Covid. Covid has been a complete disaster for low- and middle-income countries and has driven millions into extreme poverty. I don’t think the net effect of pandemic preparedness funding is bad for the global poor. Similarly with AI safety: if you actually believe that transformative AI will arrive within 20 years, then ensuring its development goes well is extremely consequential for people in low- and middle-income countries.
I did not mean that the demographic composition of organisations is the main contributor to their impact. Rather, my point is that it is the only impact we can be completely sure of. Any further impact depends on your beliefs about the value of the kind of work being done.
I personally will probably apply to the EA Long Term Future Fund for funding in the not-so-distant future. My preferred career is in beneficial AI. So obviously I believe the work in this area is valuable enough to be worth putting money into.
But looking at it as an outsider, it’s obvious that I (Guy) have an incentive to evaluate that work as important, seeing as I may personally profit from that view. Conversely, if you think AI risk, or even existential risk as a whole, is some orders of magnitude less important than it’s made out to be in EA, then the only straightforward impact of supporting X-risk research is in who gets the money and who does not. And if you think any AI research is actually harmful, then the expected value of funding it is even worse.