I direct the AI: Futures and Responsibility Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and on AI strategy/policy with the Centre for the Future of Intelligence.
Thanks Linch. I’d had P1 (“People in X are racist”) in mind in terms of “serious claim, not to be made lightly”, but I acknowledge your well-made points re: burden of proof on the latter.
I also worry about the distribution of claims in terms of signal vs noise. I think there’s a lot of racism in modern society, much of it glaring and harmful, but difficult to address (or sometimes outside the Overton window to even speak about). I don’t think matters are helped by critiques that go to lengths to read racism into innocuous texts, as the author of one of the critiques above has done, in my view, in other materials and on social media.
Thanks Halstead. I’ll try to respond later, but I’d quickly like to be clear, re: my own position, that I don’t perceive longtermism as racist, and am not claiming people within it are racist (I consider this a serious claim not to be made lightly).
I agree the racism critique is overstated, but I think there’s a more nuanced argument that greater representation/inclusion is needed for xrisk reduction to be very good for everyone.
Quick toy examples (hypothetical):
- If we avoid extinction because very rich, nearly all white people build enough sustainable bunkers, the human species continues/rebuilds, but the outcome is not good for non-white people.
- If we do enough to avoid the xrisk scenarios in climate change (say, humanity getting stuck at the poles with minimal access to the resources needed to progress civilisation, or something similar), but not enough to avoid massively disadvantaging most of the global south, we badly exacerbate inequality (maybe better than extinction, but not what we might consider a good outcome).
And so forth. So the more nuanced argument might be that we (a) need to avoid extinction, but (b) want to do so in such a way that we don’t exacerbate inequality and other harms. We stand a better chance of doing the latter by including a wider array of stakeholders than are currently in the conversation.
Me too.
There has been a relationship, and active discussions, between people in the relevant parts of the UN and researchers at xrisk orgs (FHI, CSER, FLI and others), including myself and (as noted above) Toby Ord, for 5+ years. I believe the Secretary-General has taken an active interest. I’m not sure what is appropriate for me to say on a public forum, but I’d be happy to discuss offline.
I think there are a few other considerations that may point in the direction of slightly higher salaries (or at least, avoiding very low salaries). EA skews young as a movement, but this is changing as people grow up ‘with it’ or as older people join. I think this is good. It’s important to avoid making it more difficult to join/remain involved for people who have other financial obligations that come in a little later in life, e.g.:
- child-rearing
- supporting elderly parents
Relatedly, lower salaries can be easier to accept for longer for people who come from wealthier backgrounds and have better-off social support networks, expectations of inheritance, etc. (It can feel very risky if one is only in a position to save minimally and cannot otherwise build up rainy-day funds for unexpected financial needs.)
Very cool, thanks! Relatedly, you might be interested in this literature-scanning approach for xrisk/GCR literature. It doesn’t provide cool graphs like these, but it scans newly released papers for potential relevance to GCR using an ML ‘recommendation engine’ trained on assessments of papers by various researchers in the field. You can sign up for a monthly digest of papers.
https://www.sciencedirect.com/science/article/pii/S0016328719303702?via%3Dihub
https://www.x-risk.net/methods/
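For anyone curious about the mechanics, here’s a minimal sketch of the kind of relevance classifier such a ‘recommendation engine’ might use: abstracts previously labelled by researchers train a text model, which then scores newly released papers for the digest. To be clear, this is not the actual x-risk.net pipeline; the model choice (TF-IDF plus logistic regression), the example data, and all names are illustrative assumptions.

```python
# Minimal sketch of a paper-relevance recommender of the kind described above.
# NOT the actual x-risk.net pipeline: model choice, data and names are
# illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Abstracts previously labelled by researchers as GCR-relevant (1) or not (0).
labelled_abstracts = [
    "Modelling global catastrophic biological risks from engineered pathogens.",
    "A survey of soil moisture sensors for precision agriculture.",
    "Governance mechanisms for reducing existential risk from advanced AI.",
    "Improving battery life in consumer wearables.",
]
labels = [1, 0, 1, 0]

# TF-IDF features + logistic regression: a simple stand-in for whatever
# 'recommendation engine' the real system uses.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
model.fit(labelled_abstracts, labels)

# Score newly released papers and surface the most likely GCR-relevant ones
# for a monthly digest.
new_abstracts = [
    "Tail risks of nuclear winter scenarios under modern arsenals.",
    "A new recipe database schema for mobile apps.",
]
scores = model.predict_proba(new_abstracts)[:, 1]
for abstract, score in sorted(zip(new_abstracts, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {abstract}")
```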
FYI, more info about the scenario game available here:
https://intelligencerising.org/
Writeup of game from participants:
https://www.lesswrong.com/posts/ywKhqjgdKuHoQohYL/takeaways-from-the-intelligence-rising-rpg
[edit: just found their donation page, and they received considerable OpenPhil support, so not a candidate based on the OP’s criteria.]
1Day Sooner seem like a candidate. I don’t know if they’ve received EA support, although at least one EA (David Manheim) works with them. I think they’ve done good work in bringing attention and legitimacy to the idea of human challenge trials in a pandemic situation, which seems plausibly very important for future pandemics.
https://www.1daysooner.org/
With regard to the people mentioned, neither is a forum regular, and my understanding is that neither has plans for continued collaborations with Phil.
I appreciate that these kinds of moderation decisions can be difficult, but I also don’t agree with the warning to Halstead. And if it is to be given, then I am uncomfortable that Halstead has been singled out—it would seem consistent to apply the same warning to me, as I supported Halstead’s claims, and added my own, both without providing evidence.
+1; BERI have been a brilliant support. Strongly recommend applying!
Thank you.
I don’t know how to embed screenshots, but anyone who wishes is welcome to type “phil torres” into LinkedIn, or email me for the screenshots I’ve just taken: it brings up “Researcher at Centre for the Study of Existential Risk, University of Cambridge”. As I say, it’s unclear whether this is deliberate (it may well be an oversight), but it has contributed to the mistaken external impression that Phil Torres is or was research staff at CSER.
+1 from my own experience in debate (also British style). Truth-seeking (identifying and meaningfully resolving points of disagreement between different positions) is a very different skill to trying to win a debate, and the skillsets/mindsets developed in the latter seem like they might actively work against the former unless the people doing it are very careful and self-aware.
Related cause area: Deepfake dub-over all 80k podcasts so that they’re presented by David Attenborough for prestige gains.
EA projects should be evidence based: I’ve done a survey of myself, and the results conclusively show that if 80,000 hours produced dubstep remixes of its podcasts, I would actually listen to them. The results were even more conclusive when the question included “what if Wiblin spliced in ‘Wib-wib-wib’ noises whenever crucial considerations were touched on?”.
(Disclaimer: I am co-director of CSER.) EA is a strong influence at CSER, but one of a number. At a guess, I’d say maybe a third to a half of people actively engage with EA/EA-led projects (with some ambiguity depending on how you define engagement), but a lot are coming from other academic backgrounds relevant to GCR and working in broader GCR contexts, and there’s no expectation or requirement to be involved with EA. We aim to be a broad church in this regard.
Among our senior advisers/board, folks like Martin Rees and Jaan Tallinn engage more actively with EA. There’s been little Partha/EA engagement to my knowledge. (At least some of the conversations that would ultimately lead to CSER’s founding predated EA’s existence.) I think I’d agree with comments elsewhere that Partha’s work on biodiversity loss might be considered a lower priority through an EA lens than through some other lenses (e.g. ones that place more weight on the intrinsic value of biological diversity/ecosystem preservation, or ones that place higher weight on sub-existential catastrophes or systemic vulnerabilities), although I’m glad to see it considered through an EA lens and will be interested to see EA perspectives on it.
Here is an article by Phil Torres arguing that the rise of Islam represents a very significant and growing existential risk.
I will quote a key paragraph:
“Consider the claim that there will be 2.76 billion Muslims by 2050. Now, 1% of this number equals 27.6 million people, roughly 26.2 million more than the number of military personnel on active duty in the US today. It follows that if even 1% of this figure were to hold “active apocalyptic” views, humanity could be in for a catastrophe like nothing we’ve ever experienced before.”
Firstly, the proposition that 1% of Muslims would hold “active apocalyptic” views and be prepared to act on them is pure nonsense. And “if even 1%” suggests the author is lowballing.
Secondly, this is fear-mongering against one of the most feared and discriminated-against communities in the West, being written for a Western audience.
Thirdly, it utilises another standard racist trope, population replacement: look at the growing number of the scary ‘other’, who threaten to over-run the US’s good ol’ apple-pie armed forces.
This was not a paragraph in a thesis. It was a public article, intended to reach as wide an audience as possible, and it used to be prominently displayed on his now-defunct website. The article was written several years after Beckstead’s thesis.
I will say, to Torres’s credit, that his views on Islam have become more nuanced over time, and that I have found his recent articles on Islam less problematic. This is to be praised. And he has moved on from attacking Muslims to ‘critiquing’ right-wing Americans, the atheist community, and the EA community. This is at least punching sideways, rather than down.
But he has not subjected his own body of work, or other more harmful materials, to anything like the level of critique to which he has subjected Beckstead, Mogensen et al. I consider this deeply problematic in terms of scholarly responsibility.
Re: Ireland, I don’t know much about this later shortage, but an alternative explanation would be lower population density and thus lower demand on food/agrarian resources. Not only did something like 1 million people die during the Great Famine, but more than 1 million emigrated; the total population dropped by a large amount.