Downvoted because I’m allergic to applause-lighting pluralism: I would like whitelists and blacklists; tell us which things are underrepresented and ought to be given another look / which things are underrepresented and should remain underrepresented. Be specific.
Reminder that it’s ok to feel burned out by diversity complaints not being totally honest:
But in this case your criticism is not “effective altruism should be more inclusive of different political views,” it’s “effective altruism’s political views are wrong and they should have different, correct ones,” and it is dishonest to smuggle it in as an inclusivity thing.
I’m not saying this statement/project in particular smells especially untrustworthy in this way (in fact I think it’s better than most things in the reference class!), but it’s worth pointing out that my prior is pretty tuned and it would take a lot for me to get excited about this sort of thing.
Hi Quinn,
I think you may have strawmanned the case quite a lot here, likely unintentionally (so sorry if the strawman accusation comes off harsh). So let me clarify:
This statement is about ERS, not about EA. We want to make a broad, thriving field, which EA could, and I hope would, be a part of creating.
I basically think this is wrong. Sure, I, for example, don’t want racists in my community (and I spoke about this to John). But this is a genuine attempt to make a community with a plurality of methods, visions of the future, and ways of doing things. We explicitly don’t want agreement, and we say as much. If you look at the signatories, these are people who hold a whole host of different views (although it’s probably more homogeneous than I would like, actually!).
This statement was based on a workshop, and the signatories are drawn only from there. There were large amounts of disagreement at the workshop about a bunch of things, and we definitely would want a space that could sustain this. Indeed, much of the point of setting up an ERS space is to facilitate active disagreement that we can learn from, rather than shut it down and replace one political view with another. So I essentially think your comment here is wrong.
It’s not clear to me how you can exclude racists if you “explicitly don’t want agreement”. Presumably there are heuristics for knowing what’s Overton-violating and what isn’t, but you need to be specific about how you can improve on what people already do. I don’t think the idea of “applause lights” was strawmanning at all toward the top-level post, but I see your views are more detailed and careful than that in the comments.
Sorry about the conflation between ERS and EA; I get that it can be a different stream.
I’m not super incurious about certain methodological, sociology-of-science, and metascience opportunities to improve the x-risk community. I just need to see the specifics! For example, I am incurious about Frankfurt School-y stuff because it tends to be rather silly, but presumably lots of people are working on peer review and career clout to fix its downsides (seems like an economics puzzle to me) and I’m very curious about that.
Firstly, I should say the vagueness, whilst frustrating, is there to both reflect and open up a discussion; it can appear ‘applause-lighty’ if one doesn’t recognise that this is the point of the statement, but I’m not quite sure how it does if you see the statement as providing a statement of intent and a justification for positions that ought to be debated, contested and refined.
On the topic of excluding racists, I think it is basically possible to do, given how negatively racism impacts a huge range of people, stopping them from engaging with the community and doing good work. In my mind the exclusion is clearly motivated by deep concerns that racism is unethical and that the future which may emerge from a community where racism is so prevalent would be deeply problematic, although I’m pretty sure it could also be justified from a purely utilitarian perspective, given the negative impact racism has on the community’s ability to function.
I think your demand for specifics here is both admirable (and I can give some that I agree with) and a little beside the point. One of the points we are making is that the community at present doesn’t allow practitioners to explore the sorts of methods we could be using, and many important concepts or assumptions that could help just aren’t present in this space (one example: I bet that if there had been a community with active discussions around both geoengineering and AI, the idea of simply pausing or stopping AGI development would have been explored in ERS much earlier than it actually was).
Now onto different conceptual bases. A few examples: we could use concepts from DRR like vulnerability or exposure much more, and I think a vulnerability-focused account of xrisk would look very different. We could map the different xrisk cascades using causal mapping and find the best points of leverage within them. We could expand an ‘agents of doom’-style agenda to target the organisations that most effectively produce xrisk, and study specifically how to reduce their power to cause risk. Black Swan approaches to xrisk may look different again. Or we may work from assumptions, either ethical (like RCG does) or epistemic (kind of like I do), that GCR is not very separable from xrisk, so that looking at cascades through critical systems may be deeply important. One may take STS-inspired approaches, either directly using methods from there (e.g. ANT or ethnography) or better utilising their concepts. As said earlier, I’d be interested in a Burkean political philosophy of xrisk. These are just the research agendas I may be excited about, which is a very narrow slice of what is possible or optimal. The problem is that none of these research agendas have the scope and airtime to develop, because current communal structures are ill-designed to allow this to happen, and we think the move to pluralism which we lay out (and which this forum clearly disagrees with) could allow many very useful and exciting research agendas to form.
Thanks for elaborating! Quick initials check: DRR, RCG, STS (this one’s familiar, sorta on the tip of my mind, but not quite there), ANT?
Apologies, my bad!
DRR = Disaster Risk Reduction
RCG = Riesgos Catastroficos Globales
STS = Science and Technology Studies / Science, Technology and Society
ANT = Actor Network Theory
I agree with this. All appeals to so-called ‘diversity’ I have seen on the forum have actually been appeals for EA to be folded into other left-wing social movements. I don’t think these arguments are transparent. Folding EA into Extinction Rebellion, which as I understand it is the main aim of heterodox CSER-type approaches in EA, is not a good way to increase diversity in the marketplace of ideas.
‘Differences are a strength’ implies ‘let’s have a lot more homophobes, Trump supporters, and people who want China to invade Taiwan’. You don’t actually want this. What you actually want is diversity in the form favoured by the usual preoccupations of left-wing thought—identity diversity and more left-wing environmentalism.
John’s comment points to another interesting tension.
CSER was indeed intended to be pluralistic and to provide space for heterodox approaches. And the general ‘vibe’ John gestures towards (I take it he’s not intending to be fully literal here—please correct me if I’m misinterpreting, John) is certainly more present at CSER than at other Xrisk orgs. It is also a vibe that is regularly rejected as a majority position in internal CSER full-group discussions. However, some groups and lineages are much more activist and evangelical in their approach than others. Hence they crowd out other heterodoxies and create an outsized external footprint, which can further make it difficult for other heterodoxies to thrive (whether in a centre or community). The CSER-type heterodoxy John and (I suspect) much of EA is familiar with is one that much of CSER is indifferent to, or disagrees with to various degrees. Other heterodoxies are… quieter.
In creating a pluralistic ERS, some diversities (as discussed by others) will be excluded from the get go (perhaps for good reasons, I do not offer comment on this). Of those included/tolerated, some will be far better-equipped with the tools to assert themselves. Disagreements are good, but the field on which disagreements are debated is often not an even one. Figuring out how to navigate this would be one of the key challenges for the approach proposed, I would think.
In response to your first point, I think one of the hopes of creating a pluralistic xrisk community is that different parts of the community would actually understand what work the others are doing and what perspectives they hold, rather than caricaturing or misrepresenting them (for example, I’ve heard people outside EA assume that all EA xrisk work is basically just what Bostrom says) or just not knowing what the others have to say. Ultimately, I think the workshop that this statement came out of did this really well, and so I hope that if there is a desire to move towards a more pluralistic community (which, perhaps judging from this forum, there isn’t) then we would better understand each other’s perspectives and why we disagree, and gain value from this disagreement. One example here: I think I personally have gained huge value from my discussions with John Halstead on climate, and from really trying to understand his position.
I agree on the last paragraph, and it is definitely a tension we will have to try and resolve over time. This is one of the reasons we spoke about how “we suggest that the power to confer support for different approaches should be distributed among the community rather than allocated by a few actors and funders, as no single individual can adequately manifest the epistemic and ethical diversity we deem necessary”, which would hopefully go some way to making sure that more forms of pluralism can assert themselves. Obviously, though, this won’t be perfect, and we will have to create spaces where voices that may previously not have been heard, because they don’t have all the money or aren’t loud and assertive, would get heard; this will be hard, and will definitely be difficult for someone like me who is clearly quite loud and likes to get my opinion out there.
NB: I would also like to comment (and I really don’t want to be antagonistic to John, as I do deeply respect him) that his representation of ‘CSER-type heterodoxy’, or at least how he’s framed it with his two chief examples being me and Luke, seems to me to be a misrepresentation. I know this may be arguing back too much, but given he’s said I believe something I don’t, I think it’s important to set the record straight (I’d hope it’s unintentional, although we have actually spoken a lot about my views).
This comment seems to violate EA Forum norms, particularly by assuming very bad faith from the original poster (e.g. “these claims smell especially untrustworthy” and “I don’t think these arguments are transparent.”). The comments made certainly offer very creative interpretations of the original post.
I believe you’re aware that signatories such as Anders Sandberg and SJ Beard are not advocating for “folding EA into extinction rebellion”—an extremely outlandish claim and accusation.
Many of the comments made give untrue interpretations of the original statement, which substantively states that the very young academic field of existential risk has a lot to learn from other academic fields, such as the disaster risk literature or science and technology studies. I believe this is a reasonable perspective, hence I agree with the original post.
And it’s absolutely possible to have a plurality of ideas from different academic fields while drawing a line for “homophobes, Trump supporters, and people who want China to invade Taiwan”.
Trump supporters and homophobes are easy to rule out if you assume that the only way to be valid or useful in expectation is to go to college. Which, fine, whatever, but it does violate the spirit of the thing in a way that I’d hope is obvious.
Firstly, John, before I address the (interesting and useful) substantive point you make here: I think the first paragraph is clearly, blatantly false. You accuse all these signatories and authors of not being transparent. This is a deeply disrespectful accusation of bad faith, which is clearly untrue. Please rescind this statement or I will be reporting you to the forum team for breaking forum norms.
I don’t speak for every signatory, although I would urge you to look at the list of signatories; does this look to you like a group of people who want to fold EA into XR? The letter is explicitly talking about ERS as a field and not just EA; it wants to create a more pluralistic field with EA engaging as a part of this more pluralistic and diverse field, not to replace EA with this field. Nor is it about making ERS just CSER-type approaches by any margin. Also, please point me to where in the letter you see any calls for things that look at all like ‘folding EA into Extinction Rebellion’; we are talking about a plurality of methods to study xrisk, a plurality of individuals and visions of the future to do this, and a community that best supports this, and I can’t see how this misreading of our letter comes from anywhere other than pattern matching to other pieces rather than actually engaging with what is said. Please correct this.
You are also simply wrong that ‘folding EA into Extinction Rebellion’ is the ‘main aim of heterodox CSER-type approaches to EA’; the point of this letter isn’t to advance CSER, but I think this blatant misinformation ought to be corrected. How are ‘Governing Boring Apocalypses’ (https://www.sciencedirect.com/science/article/pii/S0016328717301623) or ‘Classifying Global Catastrophic Risks’ (https://www.sciencedirect.com/science/article/pii/S0016328717301957), two classic works in the CSER-type approach, folding EA into Extinction Rebellion? Even the work on climate and xrisk, such as ‘Climate Endgame’ or ‘Climate Change’s Contribution to GCR’, has very little to do with XR. Again, unless you can provide me with evidence for your point that is not a gross misrepresentation of a large body of work by a large number of people, please rescind and correct your statement.
Onto your more substantive point: I basically agree this is an issue with calls for diversity; ultimately the tent has to stop somewhere, and that somewhere will clearly be ‘politically (not necessarily party-politically) charged’. I agree, I don’t want racists, homophobes or ableists in the ERS community. I think negotiating what the boundaries of this community are and should be is a really difficult task, and indeed, in many ways I think you undersell it. Why shouldn’t we include people who think that God will bring about the apocalypse? Indeed, focusing on a variety of methods was also a key part, so how do we constrain which methods we should use, and how can we decide what to do if two methods disagree? I think this problem is tricky, and is definitely something that will need to be iterated on, negotiated, researched and understood conceptually more. But here you throw the baby out with the bathwater; when we have good reason to want pluralism of method, vision of the future and epistemology, as well as greater diversity along many other dimensions, it is a deeply unsatisfying solution to suggest that, just because this logic might also be used to justify the inclusion of those you don’t like, we have to throw out the entire argument full stop.
Sorry, but I won’t rescind my comment. I don’t know whether it is a conscious lack of transparency or not, but it is not transparent, in my opinion. This is also indicated by Quinn above, and in Larks’ comment. The dialectic on these posts goes:
1. A categorical statement is made that ‘diversity is a strength’ or ‘diversity of all kinds is always good’.
2. Myself or someone else presents a counterexample—e.g. noting that there are lots of homophobes, nationalists, Trump supporters etc. who are underrepresented in EA.
3. The OP concedes in the comments that diversity of some kinds is sometimes bad, or doesn’t respond.
4. A new post is released some time later repeating 1.
I have made point 2 to you several times on previous posts, but in this post you again make a categorical claim that ‘diversity is a strength’ and that we need to move towards greater pluralism, when you actually endorse ‘diversity is sometimes a strength, sometimes a weakness’. Like, in this post you say we need to take on ‘non-Western-perspectives’, but among very popular non-Western perspectives are homophobia and the idea that China should invade Taiwan, which you immediately disavow in the comments.
But here you throw the baby out with the bathwater; when we have good reason to want pluralism of method, vision of the future and epistemology, as well as greater diversity along many other dimensions, it is a deeply unsatisfying solution to suggest that, just because this logic might also be used to justify the inclusion of those you don’t like, we have to throw out the entire argument full stop.
I think the issue here is that it is incumbent upon you to provide criteria for how much diversity we want, otherwise your post has no substantive content because everyone already agrees that some forms of diversity are good and some are bad. The main post says/strongly gives the impression that more diversity of all kinds is always good because there is something about diversity itself that is good. In the comments, you walk back from this position.
Correct me if I am wrong, but my understanding is that diversity is being used to defend the proposition that EA should engage in non-merit-based hiring that is biased with respect to race, gender, ability, nation, and socioeconomic status.
all of this entails greater geographic, socio-economic, cultural, gender, racial and ability diversity, both in terms of those who may have interest in being a part of the community, and those whom the community may learn from.
I think this would be unfair, and strongly disagree that this would ‘create a culture where a genuine proliferation of evidence-based insights can occur’. The diversity considerations you mention in the post also cannot defend it since they cannot distinguish good and bad forms of diversity.
My claim was “Folding EA into Extinction Rebellion, which as I understand it is the main aim of heterodox CSER-type approaches in EA”. I would guess that you and (e.g.) Kemp would be happy with this, for instance. CSER researchers like Dasgupta have collaborated on papers with Paul Ehrlich, who I think would also endorse this vibe, so I would guess Dasgupta is at least sympathetic. I basically think what I said is broadly correct, and I don’t think there is much reason for me to correct the record. I would actually be interested in some sort of statement/poll from different groups in x-risk studies about their beliefs about the world.
Hi John,
Sorry to revisit this, and I understand if you don’t. I must apologise if my previous comments felt a bit defensive on my side, as I do feel your statements towards me were untrue, but I think I have more clarity now on the perspective you’ve come from and some of the possible baggage brought to this conversation, and I’m truly sorry if I’ve been ignorant of relevant context.
I think this comment is going to address more the overall conversation between the two of us on here, and where I perceive it to have gone, although I may be wrong, and I am open to corrections.
Firstly, I think you have assumed this statement is essentially a product of CSER, perhaps because it has come from me, who was a visitor at CSER, and who has been critical of your work in a way that I know some at CSER have. [I should say, for the record, I do think your work is of high quality, and I hope you’ve never got the impression that I don’t. Perhaps some of my criticisms last year of the review process your report went through felt poor-quality (I can’t remember what they were and may not stand by them today), but if so, I am sorry.] Nonetheless, I think it’s really important to keep in mind that this statement is absolutely not a ‘CSER’ statement; I’d like to remind you of the signatories, and whilst not every signatory agrees with everything, I hope you can see why I got so defensive when you claimed that the signatories weren’t being transparent and were actually attempting to just make EA another left-wing movement. I tried really hard to get a plurality of voices in this document, which is why such an accusation offended me; but ultimately I shouldn’t have got defensive over this, and I must apologise.
Secondly, on that point, I think we may have been talking about different things when you said ‘heterodox CSER approaches to EA’. Certainly, I think Ehrlich and much of what he has called for is deeply morally reprehensible, and the capacity for ideas like his to gain ground is a genuine danger of pluralistic xrisk, because it is harder to police which ideas are acceptable or not (similarly, I have received criticism because this letter fails to call out eugenics explicitly, another danger). Nonetheless, I think we can trust that, as a more pluralistic community develops, it will better navigate where the bounds of acceptable and unacceptable views and behaviours lie, and that this would be better than us simply stipulating this now. Maybe this is a crux that we/the signatories and much of the comments section disagree on. I think we can push for more pluralism and diversity in response to our situation whilst trusting that the more pluralistic ERS community will police how far this can go. You disagree, and think we need to lay this out now, because otherwise it will either (a) end up with anything goes, including views we find morally reprehensible, or (b) mean EA is hijacked by the left. I think the second argument is weaker, particularly because this statement is not about EA but about building a broader field of Existential Risk Studies, although perhaps you see this as a bit of a Trojan horse. I understand I am missing some of the historical context that makes you think it is, but I hope that the signatories list may be enough to show you that I really do mean what I say when I call for pluralism.
I also must apologise if the call for retraction of certain parts of your comment seemed uncollegial or disrespectful to you; this was certainly not my intention. I, however, felt that your painting of my views was incorrect, and thought you might, in light of this, be happy to change it; given you are not happy to retract, I assume you are arguing either that these are in fact my underlying beliefs, or that I am being dishonest (although I have no reason to suspect you would say the latter!).
I think there are a few more substantive points we disagree on, but to me this seems like the crux of the more heated discussion, and I must apologise that it got so heated.
Thanks for these comments and for the discussion. I do genuinely appreciate discussing things with you—I appreciate the directness and willingness to engage. I also appreciate that given how direct we both are and how rude I sometimes am/seem on here, it can create tension, and that is mainly my fault here.
I think my cruxes are:
I suppose my broader point is that EA is <1% of social movements ‘trying to do social good’ in some broad sense. >98% of the remainder is focused on broadly ‘do what sounds good’ vibes, with a left wing valence, i.e. work on climate change, rich country education, homelessness, identity politics type stuff etc. Over the years, I have seen many proposals to make EA more like the remainder, or even just make it exactly the same as the remainder, in the name of diversity or pluralism.
This strikes me as an Orwellian use of those terms. I don’t think it would in any way create more pluralism or diversity to have EA shift in the direction of doing that kind of stuff. EA offers a distinctive perspective and I think it is valuable to have that in the marketplace of ideas to actually provide a challenge to what remains the overwhelmingly dominant form of thinking about ‘trying to do good’.
I also view the >98% as very epistemically closed; I don’t think they are a good advert for an epistemic promised land for EAs.
There is a powerful social force that I do not understand which means that every organisation that is not explicitly right wing eventually becomes left wing, and I have seen that dynamic at play repeatedly over the last 13 years, and I would view this as the latest example. EA is not focused on areas I would view as particularly left or right valenced at the moment.
I am also very opposed to efforts to make hiring decisions according to demographic considerations. I think the instrumental considerations enumerated for doing this are usually weak on closer examination, and I think the commonsense idea that people who do best on work-related hiring criteria will be best at their job is fundamentally correct, and the reasons it is fundamentally correct are obvious. The idea that implicit bias against demographic groups could be driving demographic skews in EA also strikes me as extremely implausible. It is violently at odds with my lived experience of being on hiring panels, or knowing about them at other organisations, where there is a very strong explicit bias against the typical EA demographic. The idea that implicit bias could be strong enough to overcome this is not credible.
I am aware that I am setting my precious social capital alight in making these arguments (which is, I think, a lesson in itself).
Once again, I think the accusation that we are not being transparent is deeply disingenuous.
If you think that saying ‘diversity is a strength’ is equivalent to saying ‘diversity is always a strength and there are no problems with increasing diversity in any way’, then I can see your concern; I’m pretty confused how this is your assumption of what we mean, and to me it is far from the common usage of the phrase. But yes, I agree that even if our epistemic situation demands diversity, there are ways this could go wrong, and ‘where the tent stops’ is not an easy problem; whilst it is a very important conversation to have and to negotiate, I think that having it in response to every call to diversify too often ends up doing much more harm than good.
Once again, this post is not talking about EA, and I’m not sure it’s particularly advocating for ‘non-merit-based’ practices (some signatories may agree, some may not). Examples of initiatives that could be undertaken to increase demographic diversity include efforts like Magnify Mentoring, doing more outreach in developing countries, funding initiatives across a broader geographic distribution, or even improving the advertising of projects and job positions. But sure, if we think increasing demographic diversity is important, we might want to have a conversation about other things that could be done.
Also, much of the diversity we speak about is pluralism of method, core assumptions etc., which only has something to do with ‘merit’ if you are judging from an already very particular perspective, and having this singular perspective is one of the things we are arguing against.
On your final point, you have definitely entirely misrepresented my position, and I am shocked, given the conversations we have had, that you would come to this conclusion about my work. I’m also pretty surprised this would be your conclusion about Luke’s work as well, which has included everything from biosecurity work for the WHO to work on AI governance and work on climate change, but I don’t know how much of his stuff you’ve been reading. I can safely say Luke disagrees that ERS should basically just be XR. I know far less about Dasgupta’s work. Also, I really don’t understand how we can be seen as fully representative of CSER-style xrisk work either. I don’t quite understand how you can claim people hold beliefs, be contradicted, and then fail to give evidence for your point whilst maintaining that you are right.
and to me it is far from the common usage of the phrase
It’s pretty lowest common denominator to say “you should infer that we mean the good stuff and not the bad stuff, since we all intuitively agree on commonsensical differences between good and bad”. Affordable housing, degrowth, etc. Diversity doesn’t have to be one of those!
Sometimes “currently we should have more of this rather than less, and there will be (non-edge) cases where the cost of more is worth it” can be reasonably obvious, even if it’s not obvious how much more, or what the least costly way to get more is, and for some specific proposals it’s unclear whether the benefits outweigh the costs.
Hi John, since I’ve corrected you that neither I nor Luke would agree with your characterisation of our positions, would you mind correcting this?