I think it’s simplistic to reduce the critique to “minority opinion bad”. At the very least, you would need to reduce it to “minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and by those they fund bad”. Bentham argued for diminishing his own privilege over others, to give other people MORE choice, irrespective of their power and wealth and with no benefit to himself. There is a difference imo.
My argument here is about whether we should be more suspicious of a view if it is held by the majority or by the minority. How suspicious we should be seems to me to depend mainly on the object-level quality of the belief, not on whether the majority holds it. Majority support is a very weak indicator, as historical majority views on slavery, women’s rights, race and homosexuality illustrate.
I don’t think your piece argues that TUA reinforces existing power relations. The main things that proponents of TUA have diverted resources to are: risks from engineered pandemics, AI alignment, nuclear war and, to a lesser extent, climate change. How does any of this entrench existing power relations?
Nitpick, but it is also not true that the view you criticise is mainly advocated by billionaires. Obviously, only a tiny minority of billionaires are longtermists and only a tiny minority of longtermists are billionaires.
The main things that proponents of TUA have diverted resources to are: risks from engineered pandemics, AI alignment, nuclear war and, to a lesser extent, climate change. How does any of this entrench existing power relations?
This is moving money to mostly wealthy, Western organisations and researchers, money that would otherwise have gone to the global poor. So the counterfactual impact is to entrench wealth disparity.
I think it is very unclear whether diverting money to these organisations would in fact entrench wealth disparity. Examining the demographics of the organisations funded is a faulty way to assess the overall effect on global wealth inequality: the main effect these organisations have comes from the actions they take, not from the take-home pay of their staff.
Consider pandemic risk. Open Phil has been the main funder in this space for several years, and if they had had their way, the world would have been much better prepared for COVID. COVID has been a complete disaster for low- and middle-income countries and has driven millions into extreme poverty. I don’t think the net effect of pandemic preparedness funding is bad for the global poor. Similarly with AI safety: if you actually believe that transformative AI will arrive within 20 years, then ensuring its development goes well is extremely consequential for people in low- and middle-income countries.
I did not mean that the demographic composition of organisations is the main contributor to their impact. Rather, my point is that it is the only impact we can be completely sure of. Any further impact depends on your beliefs about the value of the kind of work being done.
I personally will probably apply to the EA Long-Term Future Fund for funding in the not-so-distant future. My preferred career is in beneficial AI. So obviously I believe the work in this area has enough value to be worth putting money into.
But looking at it as an outsider, it’s obvious that I (Guy) have an incentive to evaluate that work as important, seeing as I may personally profit from that view. And if you think AI risk, or even existential risk as a whole, is orders of magnitude less important than EA makes it out to be, then the only straightforward impact of supporting x-risk research is in who gets the money and who does not. If you think some of that AI research is actually harmful, then the expected value of funding it is even worse.
Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate? I don’t think either claim is true (or even close to true). It’s also not the claim being made:
You’re right, my mistake.