You seem to assume that we should be especially suspicious of a view if it is not held by a majority of the global population. Throughout history, the views of the global majority seem to me to have been an extremely poor guide to accurate moral beliefs. For example, a few hundred years ago, most people had abhorrent views about animals, women and people of other races. By the arguments here, do you think that people like Benjamin Lay, Bentham and Mill should not have advocated for change in these areas, including changes in policy?
As I said in a different but related context earlier this week, “If a small, non-representative group disagrees with the majority of humans, we should wonder why, and given base rates and the outside view, worry about failure modes that have affected similar small groups in the past.”
I do think we should worry about failure modes and being wrong. But I think the main reason to do so is that people are often wrong, bad at reasoning, and subject to a host of biases. The fact that we are in a minority of the global population is an extremely weak indicator of being wrong. The majority has been gravely wrong on many moral and empirical questions, in the past and today. It's not at all clear whether the base rate of being wrong is higher for 'minority views' than for 'majority views', and that question is extremely difficult to answer because there are lots of ways of slicing up the minority you are referring to.
I feel like there's just a crazy number of minority views (in the limit, psychoses held by just one individual), most of which must be wrong. We're more likely to hear about minority views which later turn out to be correct, but it seems very implausible that the base rate of correctness is higher for minority views than for majority views.
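To make that selection effect concrete, here is a minimal toy simulation; every number in it is an illustrative assumption of mine, not an estimate from this thread:

```python
# Toy model: on each question there is one majority view and many distinct
# minority views, at most one of which can be correct. All parameters are
# made-up assumptions for illustration only.
import random

random.seed(0)

N_QUESTIONS = 100_000    # assumed number of disputed questions
K_MINORITY = 20          # assumed distinct minority views per question
P_MAJORITY_RIGHT = 0.8   # assumed chance the majority view is correct
REPORTING_BOOST = 50     # assumed factor by which vindicated minority
                         # views are more likely to be remembered

majority_correct = 0
minority_correct = []    # correctness of every individual minority view

for _ in range(N_QUESTIONS):
    maj_right = random.random() < P_MAJORITY_RIGHT
    majority_correct += maj_right
    # If the majority is wrong, exactly one minority view is right.
    right_idx = None if maj_right else random.randrange(K_MINORITY)
    for i in range(K_MINORITY):
        minority_correct.append(i == right_idx)

print(f"P(correct | majority view) = {majority_correct / N_QUESTIONS:.3f}")
print(f"P(correct | minority view) = {sum(minority_correct) / len(minority_correct):.4f}")

# Selection effect: sample the minority views we actually 'hear about',
# over-weighting the vindicated ones.
weights = [REPORTING_BOOST if c else 1 for c in minority_correct]
heard_about = random.choices(minority_correct, weights=weights, k=10_000)
print(f"P(correct | minority view we hear about) = {sum(heard_about) / len(heard_about):.3f}")
```

Under these made-up numbers, a random minority view is right about 1% of the time versus 80% for the majority view, yet the minority views we "hear about" look vindicated roughly a third of the time, which is the inflation I'm gesturing at.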
On the other hand I think there’s some distinction to be drawn between “minority view disagrees with strongly held majority view” and “minority view concerns something that majority mostly ignores / doesn’t have a view on”.
That is a fair point. Departures from global majority opinion still seem like a pretty weak 'fire alarm' for being wrong. Taking a position that is, e.g., contrary to most experts on a topic would be a much greater warning sign.
I see how this could be misread. I'll reformulate the statement: "If our small, non-representative group comes to a conclusion, we should ask, given base rates about correctness in general and the outside view, which failure modes have affected similar small groups in the past, consider whether they apply, and consider how we might be wrong or misguided."
So yes, errors are common to all groups, and being in a minority isn't an indicator of being wrong, which I mistakenly implied. But the way in which groups are wrong is influenced by group-level reasoning fallacies and biases, which are a product both of individual fallacies and of characteristics of the group. That's why investigating how previous similar groups failed seems like a particularly useful way to identify relevant failure modes.
Yes, I agree with that.
I think it's simplistic to reduce the critique to "minority opinion bad". At the very least, you need to refine it to "minority opinion which happens to reinforce existing power relations, and which is mainly advocated by billionaires and those they fund, bad". Bentham argued for diminishing his own privilege over others, to give other people MORE choice, irrespective of their power and wealth and with no benefit to himself. There is a difference, imo.
My argument here is about whether we should be more suspicious of a view if it is held by the majority or by a minority. That seems to me to depend mainly on the object-level quality of the belief, not on whether it is held by the majority: majority opinion is a very weak indicator, as the examples of slavery, the treatment of women, racism, homosexuality, etc. illustrate.
I don't think your piece argues that the TUA (the "techno-utopian approach") reinforces existing power relations. The main things that proponents of the TUA have diverted resources to are engineered pandemics, AI alignment, nuclear war and, to a lesser extent, climate change. How does any of this entrench existing power relations?
Nitpick, but it is also not true that the view you criticise is mainly advocated by billionaires. Obviously, only a tiny minority of billionaires are longtermists, and only a tiny minority of longtermists are billionaires.
"The main things that proponents of the TUA have diverted resources to are engineered pandemics, AI alignment, nuclear war and, to a lesser extent, climate change. How does any of this entrench existing power relations?"

This is moving money to mostly wealthy, Western organisations and researchers, money that would otherwise have gone to the global poor. So the counterfactual impact is to entrench wealth disparity.
I think it is very unclear whether diverting money to these organisations would entrench wealth disparity. Examining the demographics of the organisations funded is a faulty way to assess the overall effect on global wealth inequality: the main effect these organisations will have is via the actions they take, rather than via the take-home pay of their staff.
Consider pandemic risk. Open Phil has been the main funder in this space for several years, and if they had had their way, the world would have been much better prepared for Covid. Covid has been a complete disaster for low- and middle-income countries, and has driven millions into extreme poverty. I don't think the net effect of pandemic-preparedness funding is bad for the global poor. Similarly with AI safety: if you actually believe that transformative AI will arrive in 20 years, then ensuring that its development goes well is extremely consequential for people in low- and middle-income countries.
I did not mean that the demographic composition of organisations is the main contributor to their impact. Rather, what I'm saying is that it is the only impact we can be completely sure of. Any further impact depends on your beliefs regarding the value of the kind of work done.
I personally will probably apply to the EA Long-Term Future Fund for funding in the not-so-distant future. My preferred career is in beneficial AI. So obviously I believe the work in this area has value that makes it worth putting money into.
But looking at it as an outsider, it's obvious that I (Guy) have an incentive to evaluate that work as important, seeing as I may personally profit from that view. If, instead, you think AI risk, or even existential risk as a whole, is some orders of magnitude less important than it's made out to be in EA, then the only straightforward impact of supporting x-risk research is in who gets the money and who does not. And if you think the AI research itself is actually harmful, then the expected value of funding it is even worse.
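To put entirely made-up numbers on that last point, here is a sketch of how the sign of the calculation flips with one's beliefs; nothing below is an estimate, it just illustrates the structure of the argument:

```python
# Toy expected-value comparison. The grant size, the baseline, and every
# multiplier below are hypothetical numbers chosen only for illustration.
GRANT = 1_000_000                # hypothetical grant size, in dollars
BASELINE_GOOD_PER_DOLLAR = 1.0   # "good units" per dollar given to the global poor
CLAIMED_MULTIPLE = 100.0         # assumption: proponents implicitly value
                                 # x-risk research at 100x the baseline

# How important you think the field really is, relative to proponents' view:
for importance in (1.0, 0.01, 0.0001, -0.001):
    xrisk_good_per_dollar = CLAIMED_MULTIPLE * importance
    ev_vs_baseline = GRANT * (xrisk_good_per_dollar - BASELINE_GOOD_PER_DOLLAR)
    print(f"relative importance {importance:>8}: "
          f"EV vs. giving to the global poor = {ev_vs_baseline:>14,.0f}")
```

If you share the proponents' estimate (importance 1.0), the grant dominates the baseline; at a couple of orders of magnitude less important it is roughly a wash or worse, so the distributional effect is what remains; and with a negative sign (harmful research) the expected value is negative outright.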
Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate? I don't think either claim is true (or even close to true). It's also not the claim being made.
You’re right, my mistake.
I had the same reaction, in that the dominant worldview today regards extreme levels of animal suffering as acceptable, but most of us would agree that it's not, and believe we should do our utmost to change it.
I think the difference between the examples you’ve mentioned and the parallel to existential risk is with the qualifier Luke and Carla provided in the text (emphasis mine):
"Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous"

The key difference is that the study of existential risk is tied to the fate of humanity in ways that animal welfare, misogyny and racism aren't (arguably the latter two might influence the direction of humanity significantly, but probably not whether humanity ceases to exist).
I'm not necessarily convinced that existential risk studies is so different from the examples you've mentioned that we need to approach it in a much more democratic way, but I do think the qualifiers given by the authors mean the analogies you've drawn aren't that watertight.
Most whites had abhorrent views on race at certain points in the past (though probably not before 1500, unless medieval antisemitism counts), but that is weak evidence that most people did, since whites were always a minority. I'm not sure many of us know what racial views, if any, people held in Nigeria, Iran, China or India in 1780.
I seem to remember learning that rampant racism in China helped to cause the Taiping rebellion? And there are enormous amounts of racism and sectarianism today outside Western countries: look at the Rohingya genocide, the Rwandan genocide, the Nigerian civil war, the current Ethiopian civil war and the Lebanese political crisis, for a few examples.
Every one of these examples should be taken with skepticism as this is far outside my area of expertise. But while I agree with the sentiment that we often conflate the history of the world with the history of white people, I’m not sure it’s true in this specific case.
Yeah, you're probably right. It's just that I got a strong "history = Western history" vibe from the comment I was responding to, but maybe that was unfair!
I'd be pretty surprised if almost everyone didn't have strongly racist views in 1780. Anti-black views are very prevalent in India and China today, as I understand it; Gandhi, for example, had pretty racist attitudes.
I think there is a “not” missing: “view if it is held by a majority of the global population.”
Sorry, yeah, corrected.