Independent researcher on SRM and GCR/XRisk and on pluralisms in existential risk studies
Gideon Futerman
Best Practices for early-career Research Management
I think the most likely thing is that on a post like this the downvote vs disagree-vote distinction isn't very strong. It's a list of suggestions, so one would upvote the suggestions one likes most and downvote those one likes least (to contribute to visibility). If this is the case, I think that's pretty fair, to be honest.
If not, then I can only posit a few potential reasons, but these all seem bad enough to me that I would rather assume the above is true:
People think 80K platforming people who think climate change could contribute to XRisk would be actively harmful (eg by distracting people from more important problems)
People think 80K platforming Luke (due to his criticism of EA, which I assume they think is wrong or in bad faith) would be actively harmful, so it shouldn’t be considered
People think having a podcast specifically talking about what EA gets wrong about XRisk would be actively harmful (perhaps it would turn newbies off, so we shouldn’t have it)
People think suggesting Luke is trolling, because they think there is no chance that 80K would platform him (this would feel very uncharitable towards 80K imo)
Christine Korsgaard on Kantian approaches to animal welfare / her recent-ish book ‘Fellow Creatures’
Some of the scholars who’ve worked on insect or decapod pain/sentience (Jonathan Birch, Meghan Barrett, Lars Chittika etc)
Bob Fischer on comparing interspecies welfare
Luke Kemp on:
Climate Change and Existential Risk
The role of Horizon Scans in Existential Risk Studies
His views on what EA gets wrong about XRisk
Deep Systems Thinking and XRisk.
Alternatively, for another episode on Climate Change and XRisk that would be narrower and less controversial/critical of EA than Luke is, Constantin Arnsschedit would be good
I think another discussion presenting SRM in the context of GCR might be good; there has now been a decent amount of research on this which probably proposes actions rather different from what SilverLining presents.
SilverLining is also decently controversial in the SRM community, so some alternative perspectives would probably be better than Kelly.
Send me a DM if you’re interested; I’d be happy to provide a bunch of resources and to put you in contact with some people who could help.
Hi John,
Sorry to revisit this, and I understand if you don’t. I must apologise if my previous comments felt a bit defensive from my side, as I do feel your statements towards me were untrue, but I think I now have more clarity on the perspective you’ve come from and some of the possible baggage brought to this conversation, and I’m truly sorry if I’ve been ignorant of relevant context.
This comment is mostly going to address the overall conversation between the two of us on here, and where I perceive it to have gone, although I may be wrong, and I am open to corrections.
Firstly, I think you have assumed this statement is essentially a product of CSER, perhaps because it has come from me, who was a visitor at CSER, and who has been similarly critical of your work in a way that I know some at CSER have. [I should say, for the record, I do think your work is of high quality, and I hope you’ve never got the impression that I don’t. Perhaps some of my criticisms last year of the review process your report went through were of poor quality (I can’t remember what they were and may not stand by them today), but if so, I am sorry.] Nonetheless, I think it’s really important to keep in mind that this statement is absolutely not a ‘CSER’ statement; I’d like to remind you of the signatories, and whilst not every signatory agrees with everything, I hope you can see why I got so defensive when you claimed that the signatories weren’t being transparent and were actually attempting to just make EA another left-wing movement. I tried really hard to get a plurality of voices in this document, which is why such an accusation offended me, but ultimately I shouldn’t have got defensive over this, and I must apologise.
Secondly, on that point, I think we may have been talking about different things when you said ‘heterodox CSER approaches to EA.’ Certainly, I think Ehrlich and much of what he has called for is deeply morally reprehensible, and the capacity for ideas like his to gain ground is a genuine danger of pluralistic xrisk, because it is harder to police which ideas are acceptable or not (similarly, I have received criticism because this letter fails to call out eugenics explicitly, another danger). Nonetheless, I think we can trust that as a more pluralistic community develops, it will better navigate where the bounds of acceptable and unacceptable views and behaviours lie, and that this would be better than us simply stipulating them now. Maybe this is a crux that we/the signatories and much of the comments section disagree on. I think we can push for more pluralism and diversity in response to our situation whilst trusting that the more pluralistic ERS community will police how far this can go. You disagree and think we need to lay this out now, otherwise it will either a) end up with anything goes, including views we find morally reprehensible, or b) mean EA is hijacked by the left. I think the second argument is weaker, particularly because this statement is not about EA but about building a broader field of Existential Risk Studies, although perhaps you see this as a bit of a trojan horse. I understand I am missing some of the historical context that makes you think it is, but I hope that the signatories list may be enough to show you that I really do mean what I say when I call for pluralism.
I also must apologise if the call for retraction of certain parts of your comment seemed uncollegial or disrespectful to you; this was certainly not my intention. I, however, felt that your painting of my views was incorrect, and thought you might, in light of this, be happy to change it; given you are not happy to retract, I assume you are either trying to make the argument that these are in fact my underlying beliefs, or that I am being dishonest (although I have no reason to suspect you would say this!).
I think there are a few more substantive points we disagree on, but to me this seems like the crux of the more heated discussion, and I must apologise it got so heated.
Hi John, since I’ve corrected you that neither I nor Luke would agree with your characterisation of our positions, would you mind correcting this?
In response to your first point, I think one of the hopes of creating a pluralistic xrisk community is that different parts of the community actually understand what work and perspectives each are pursuing, rather than either caricaturing/misrepresenting them (for example, I’ve heard people outside EA assuming all EA XRisk work is basically just what Bostrom says) or simply not knowing what others have to say. Ultimately, I think the workshop that this statement came out of did this really well, and so I hope, if there is desire to move towards a more pluralistic community (which, perhaps judging from this forum, there isn’t), that we would better understand each other’s perspectives and why we disagree, and gain value from this disagreement. One example: I think I personally have gained huge value from my discussions with John Halstead on climate, and from really trying to understand his position.
I agree with the last paragraph, and it is definitely a tension we will have to try and resolve over time. This is one of the reasons we wrote that “we suggest that the power to confer support for different approaches should be distributed among the community rather than allocated by a few actors and funders, as no single individual can adequately manifest the epistemic and ethical diversity we deem necessary,” which would hopefully go some way to making sure that more forms of pluralism can assert themselves. Obviously, though, this won’t be perfect, and we will have to create spaces where voices that may previously not have been heard, because they don’t have all the money or aren’t loud and assertive, would get heard; this will be hard, and will definitely be difficult for someone like me who is clearly quite loud and likes to get my opinion out there.
NB: (I would also like to comment, and I really don’t want to be antagonistic to John as I do deeply respect him, but I do think his representation of ‘CSER-type heterodoxy’, or at least how he’s framed it with his two chief examples being me and Luke, seems to me to be a misrepresentation. I know this may be arguing back too much, but given he’s said I believe something I don’t, I think it’s important to set the record straight (I’d hope it’s unintentional, although we have actually spoken a lot about my views).)
I’m honestly rather confused by how people can disagree-vote with this. Did I get these stats wrong?
Apologies, my bad! DRR = Disaster Risk Reduction; RCG = Riesgos Catastroficos Globales; STS = Science and Technology Studies/Science, Technology and Society; ANT = Actor-Network Theory.
Some of those definitions definitely do, and some probably do, so yes! However, I think there is a valid question whether studying S-Risks gains much from being part of the ERS community or whether it would be more beneficial for it to be its own thing. I’m unsure (genuinely) how much either side gains from the involvement (maybe a lot!). In general, the two areas commonly associated with xrisk that I don’t know how useful it is to have fully within ERS (although being in conversation with it definitely helps) are pure technical alignment (when divorced from AI Strategy) and maybe s-risks, but I’m pretty unsure of this take, and many signatories would probably disagree.
Once again, I think the accusation that we are not being transparent is deeply disingenuous.
If you think that saying ‘diversity is a strength’ is equivalent to ‘diversity is always a strength and there are no problems with increasing diversity in any way’, then I can see your concern; I’m pretty confused how this is your assumption of what we mean, and to me it is far from the common usage of the phrase. But yes, I agree that even if our epistemic situation demands diversity, there are ways this could go wrong, and ‘where the tent stops’ is not an easy problem; whilst it is a very important conversation to have and to negotiate, I think that having this conversation in response to any call to diversify too often ends up doing much more harm than good.
Once again, this post is not talking about EA, and I’m not sure it’s particularly advocating for ‘non-merit based’ practices (some signatories may agree, some may not). Examples of initiatives that could increase demographic diversity are efforts like Magnify Mentoring, doing more outreach in developing countries, funding initiatives across a broader geographic distribution, or even improving the advertising of projects and job positions. But sure, if we think increasing demographic diversity is important, we might want to have a conversation about other things that can be done.
Also, much of the diversity we speak about is pluralism of method, core assumptions etc, which only has something to do with ‘merit’ if you are judging from an already very particular perspective, and having this singular perspective is one of the things we are arguing against.
On your final point, you have definitely entirely misrepresented my position, and I am shocked, given the conversations we have had, that you would come to this conclusion about my work. I’m also pretty surprised this would be your conclusion about Luke’s work as well, which has included everything from biosecurity work for the WHO to work on AI governance and work on climate change, but I don’t know how much of his stuff you’re reading. I can safely say Luke disagrees that ERS should basically just be XR. I know far less about Dasgupta’s work. Also, I really don’t understand how we can be seen as fully representative of CSER-style xrisk work either. I don’t quite understand how you can claim people hold beliefs, be corrected, and then fail to give evidence for your point whilst maintaining that you are right.
Firstly, I should say the vagueness, whilst frustrating, is there to both reflect and open up a discussion; it can appear ‘applause-lighty’ if one doesn’t recognise that this is the point of the statement, but I’m not quite sure how it does if you see the statement as providing a statement of intent and justification for positions that ought to be debated, contested and refined.
On the topic of excluding racists, I think it is basically possible to do, given how negatively racism impacts a huge range of people, discouraging them from engaging with the community and doing good work. Whilst in my mind this exclusion is clearly motivated by deep concerns that the racism is unethical and that the future that may emerge from a community where racism is so prevalent is deeply problematic, I’m pretty sure it could still be justified from purely utilitarian perspectives, due to the negative impact racism has on the community’s ability to function.
I think your demand for specifics here is both admirable (and I can give some that I agree with) and a little beside the point. One of the points we are making is that the community at present doesn’t allow practitioners to explore the sorts of methods we could be using, and many important concepts or assumptions that could aid us just aren’t present in this space (one example: I bet that if there had been a community containing active discussions around both geoengineering and AI, then the idea of simply pausing or stopping AGI development would have been explored much earlier than it was in ERS).

Now onto different conceptual bases. A few examples: we could use concepts from DRR like vulnerability or exposure much more, and I think a vulnerability-focused account of xrisk would look very different. We could map the different xrisk cascades using causal mapping and find the best points of leverage within them. We could expand out an ‘agents of doom’-like agenda to target those organisations that produce xrisk most effectively and study specifically how to reduce their power to cause risk. Black Swan approaches to xrisk may look different again. Or we may work from assumptions, either ethical (like RCG does) or epistemic (kind of like I do), that GCR is not very separable from xrisk, so looking at cascades through critical systems may be deeply important. One may take STS-inspired approaches, either directly using methods from there (eg ANT or ethnography) or better utilising its concepts. As said earlier, I’d be interested in a Burkeian political philosophy of xrisk.

These are just the research agendas I may be excited about, which is a very narrow slice of what is possible or optimal. The problem is that none of these research agendas have the scope and airtime to develop, because current communal structures are ill designed to allow this to happen, and we think the move to pluralism which we lay out (and which this forum clearly disagrees with) could allow many very useful and exciting research agendas to form.
I do basically agree that we should have people who we would say are on the right (hence my suggestion), and I can see why my previous comment may have come across as dismissive (apologies); I just don’t agree that breaking it down to party politics (eg Republican) or left-right, given how broad those categories are, is necessarily that useful. (Again, please remember I’m not just talking about EA but ERS, but I think your point still applies.)
So I think you are pointing to something real here, although even then I don’t think it actually constitutes a great defence of the status quo of ERS. If we are to get people with preexisting experience in the disciplines we want, then, as most disciplines are far from equal, and as job security and lack of mentorship in an interdisciplinary space like this may mean being more senior is very beneficial, we are likely not to get the sorts of demographic diversity we may want. However, ERS at present also rarely includes and reaches out to people with deep expertise in these fields, so in some ways it feels like we get the worst of both worlds: we bring in relatively young and inexperienced people and yet churn out people who broadly think very similarly and have the ability to influence very similar spaces. Sure, I’m not saying juggling all this is easy, nor that it will be perfect, but this status quo seems really suboptimal.
I’m not quite sure how to address most of your comment, as I think in many ways we are critiquing much of the underlying logic of how we evaluate what ought to be funded. It’s essentially suggesting, not really from a place of justice at all, that the current structures fail to optimise for or assess the things that are useful for the community. And I think it may also be implicitly suggesting that centralising this power as is done currently is counter to these aims, and so power in the community ought to be better distributed. One response would be to change these criteria to better optimise for things that we may think are more valuable, eg novelty of ideas/approach. Another would be to randomise funding, ie every proposal that crosses a certain bar gets entered into a lottery. Ultimately, I (not necessarily the other signatories) am pretty comfortable making the argument for this on purely utilitarian grounds, although some people may feel a pull towards talking in terms of justice; a community that has metrics and evaluations that better encourage scientific creativity and a pluralism of ideas and approaches will be better off, and evaluation criteria that optimise for these sorts of pluralism are considerably better than the status quo.
I also worry that your focus is essentially on individual epistemics over a small range of positions, rather than the overall questions of funding structures and where funding goes. For example, Open Phil could have decided to fund fellowships in developing countries rather than ERA in Cambridge, where I can’t imagine the applicants are any less good, but didn’t. I think this expanding of geographic diversity may be a relatively easy win, although it definitely isn’t the only criterion we could have followed. (Also, as a side point, I would suggest there are good cost-effectiveness reasons to do this as well; for example, the DEGREES initiative (which Open Phil funds) now has the largest geoengineering research team in the world, all based in developing countries, with significant impact on informing policy, for a cost that I believe was less than the budget of ERA. Here, increasing geographic diversity must have played into their decision, again purely from a utilitarian and not a justice perspective.)
In response to the trade-off stuff, to some extent this is true, although I’m unconvinced it is that different from the trade-offs we discuss. In response to your examples, I would say that the proposals we make definitely promote ‘novelty of ideas’ much more heavily than the status quo, and indeed we are making a clear trade-off between more ‘novel ideas’ and more depth along the current ‘orthodox’ research paths.
Hi Quinn,
I think you may have strawmanned the case quite a lot here, likely unintentionally (so sorry if the strawman accusation comes off as harsh). So let me clarify:
This statement is about ERS, not about EA. We want to make a broad, thriving field, which EA could, and I hope would, be a part of creating.
I basically think this is wrong. Sure, I, for example, don’t want racists in my community (and I spoke about this to John). But this is a genuine attempt to make a community with a plurality of methods, visions of the future, and ways of doing things. We explicitly don’t want agreement, and we say as much. If you look at the signatories, these are people who hold a whole host of different views (although it’s probably more homogeneous than I would like, actually!)
This statement was based on a workshop, and the signatories are only drawn from there. There were large amounts of disagreement at the workshop about a bunch of things, and we definitely would want a space that could sustain this. Indeed, much of the point of setting up an ERS space is to facilitate active disagreement that we can learn from, rather than shutting it down and replacing one political view with another. So I essentially think your comment here is wrong.
I feel that in a number of areas this post relies on AI being constructed/securitised in ways that seem contradictory to me. (By constructed, I am referring to the way the technology is understood, perceived and anticipated, what narratives it fits into and how we understand it as a social object. By securitised, I mean brought into a limited policy discourse centred around national security, one that justifies the use of extraordinary measures (eg mass surveillance or conflict) and is concerned narrowly with combatting the existential threat to the state, which is roughly equal to the government, the state’s territory and its society.)
For example, you claim that hardware would be unlikely to be part of any pause effort, which would imply that AI is constructed as important but not necessarily exceptional (perhaps akin to climate change). This is also likely what would allow companies to easily relocate without major issues. You then claim it is likely that international tensions and conflict would occur over the pause, which would imply securitisation thorough enough that breaching the pause would be considered a sufficient threat to national security that conflict could be countenanced; therefore exceptional measures to combat the existential threat are entirely justified (perhaps akin to nuclear weapons, or even more severe). Many of your claims of what is ‘likely’ seem to oscillate between these two conditions, which in a single jurisdiction seem unlikely to hold simultaneously. You then need a third construction of AI as a technology powerful and important enough to your country to risk conflict with the country that has thoroughly securitised it. Similarly, there must be elements in the paused country that are powerful and also believe it is a super important technology that can be very useful, despite its thorough securitisation (or because of it; I don’t wish to project securitisation as necessarily safe or good! Indeed, the links to military development, which could be facilitated by a pause, may be very dangerous indeed).
You may argue back on two points. Firstly, that whilst all the points couldn’t occur simultaneously, they are all plausible; here I agree, but then the confidence in your language would need to be toned down. Secondly, that these different constructions of AI may differ across jurisdictions, meaning that all of these outcomes are likely. This also seems quite unlikely, as countries are impacted by each other; narratives do spread, particularly in an interconnected world and particularly if they are held by powerful actors. Moreover, if powerful states are anywhere close to risking conflict over this, other economic or diplomatic measures would be utilised first, likely meaning the only countries that would continue to develop it would be those who construct it as super important (those who didn’t would likely give in to the pressure). In a world where the US or China construct the AI Pause as a vital matter of national security, middle-ground countries in their orbit allowing its development would not be countenanced.
I’m not saying a variety of constructions are not plausible. Nor am I saying that we would necessarily fall to the extreme painted in the above paragraph (honestly this seems unlikely to me, but if we don’t, then a Pause by global cooperation seems more plausible). Rather, I am suggesting that, as it stands, your ‘likely outcomes’ are, together, very unlikely to happen, as they rely on different worlds from one another.