I direct the AI:Futures and Responsibility Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and AI strategy/policy with the Centre for the Future of Intelligence.
Though it’s interesting to note that Justin’s Fermi estimate is not far off how one of Johns Hopkins’ CHS scenarios played out (coronavirus, animal origin, 65m deaths worldwide).
http://www.centerforhealthsecurity.org/event201/scenario.html
Note: this was NOT a prediction (and had some key differences, including higher mortality associated with their hypothetical virus and significant international containment failure beyond that seen to date with nCoV).
Hmm, interesting. This goes strongly against my intuitions. In case of interest, I’d be happy to give you 5:1 odds that this Fermi estimate is at least an order of magnitude too severe (for a small stake of up to £500 on my end, £100 on yours). Resolved in your favour if, one year from now, fatalities are >1/670 of the world population (or 11.6M based on current world population); in my favour if <1/670.
(Happy to discuss/modify/clarify terms of above.)
Edit: We have since amended the terms to 10:1 (£50 of Justin’s to £500 of mine).
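For reference, a rough check of the resolution threshold, assuming a current world population of roughly 7.8 billion (my assumption here, not a figure specified in the bet terms):

\[ \frac{7.8 \times 10^{9}}{670} \approx 1.16 \times 10^{7} \approx 11.6 \text{ million fatalities,} \]

so the bet resolves in Justin’s favour if fatalities one year on exceed roughly 11.6 million, and in mine if they fall below that.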
Thanks Vaidehi, these are very good points.
I agree that SJ is more diffuse and less central—I think this is one of the reasons why thinking of it in terms of a movement that one might ally with feels a little unnatural to me. I also agree that EA is more centralised and purposeful.
Your point about what level of discourse suggests what kind of engagement is also a good one. I think this also links to the issue that (in my view) it’s in the nature of EA that there’s a ‘thick’ and a ‘thin’ version of EA in terms of the people involved. Here ‘thick’ refers to a movement of people who self-identify as EAs, see themselves as part of a strong social and intellectual community, and are influenced by movement leaders and shapers.
Then there’s a ‘thin’ version that includes people who might do one or more of the following: (a) work in EA-endorsed cause areas with EA-compatible approaches; (b) find EA frameworks and literature useful to draw on (among other frameworks); (c) be generally supportive of or friendly towards some or most of the goals of EA, without necessarily making EA a core part of their identity or seeing themselves as part of a movement. With so many people who interact with EA working primarily in cause areas rather than in ‘central movement’ EA per se, my sense is that this ‘thin’ EA or EA-adjacent set of people is reasonably large.
It might make perfect sense for ‘thick EA’ leaders to think of EA vs SJ in terms of movements, alliances, and competition for talent. At the same time, this might be a less intuitive and more uncomfortable way for ‘thin EA’ folk to see the interaction being described and playing out. While I don’t have answers, I think it’s worth being mindful that there may be some tension there.
I’ve been trying to figure out what I find a little uncomfortable about 1-3, as someone who also has links to both communities. I think it’s that I personally find it more productive to think about both as frameworks/bodies of work plus associated communities, rather than as movements, whereas here it feels like they are being described as tribes (one is presented as overall better than the other; they are presented as competing for talent; there should be alliances). I acknowledge, however, that in both EA and SJ there are definitely many who see these more in the movement/tribe sense.
Through my framing, I find it easier to imagine the kinds of constructive engagements I would personally like to see—e.g. people primarily thinking through lens A adopting valuable insights and methodologies from lens B (captured nicely in your point 4). But I think this comes back to the oft-debated question (in both EA and SJ) of whether EA/SJ is (a) a movement/tribe or (b) a set of ideas/frameworks/body of knowledge. I apologise if I’m misrepresenting any views, or presenting distinctions overly strongly; I’m trying to put my finger on what might be a somewhat subtle distinction, but one which I think is important in terms of how engagement happens.
On the whole I agree with the message that engaging constructively, embracing the most valuable and relevant insights, and creating a larger, more inclusive community is very desirable.
It can be difficult to figure out where the biggest marginal benefit will be, or even how to fully grok the landscape, when there’s already quite a lot happening in different domains. A few of us at CSER have been thinking of organising a workshop or hackathon bringing together climate researchers (science, policy, related tech) and leading EA thinkers to explore in more detail where the EA skillset, and interested individual EAs with a range of backgrounds, might best fit in and contribute most effectively. I’d be interested in sounding out how much interest/value people would see in this.
I’ve spent quite a bit of time trying to discuss the matter privately with the main author of the white supremacy critique, as I felt the claim was very unfair in a variety of ways and know the writer personally. I do not believe I have succeeded in bringing them round. I think it’s likely that there will be a journal article making this case at some point in the coming year.
At that point a decision will need to be made by thinkers in the longtermist community re: whether it is appropriate to respond or not. (It won’t be me; I don’t see myself as someone who ‘speaks’ for EA or longtermism, but rather as someone whose work fits within a broad longtermist/EA frame.)
What makes this a little complicated, in my view, is that there are (significantly) weaker versions of these critiques—e.g. relating to diversity, inclusiveness, founder effects, and some of the strategies within EA—that are more defensible (although I think EA has made good strides on most of them), and these may get tangled up with the more extreme claim among those who consider the weaker critiques valid.
While I am unsure about the appropriate strategy for the extreme claim, if and when it is publicly presented, it seems good to me to steelman and engage with the less unreasonable claims.
Worth noting the changes that are apparently going to be made in the UK civil service, likely by Cummings’ design, which seem quite compatible with a lot of rationalist thinking:
More scientists in the civil service
Data science, systems thinking, and superforecasting training prioritised.
https://twitter.com/JohnRentoul/status/1212486981713899520
A quick note: from googling longtermism, the hyphenated version (‘long-termism’) is already in use, particularly in finance/investment contexts, but in a way that in my view is broadly compatible with the use here, so I personally think it’s fine (Will’s version being much broader/richer/more philosophical in its scope, obviously).
long-termism in British English
NOUN
the tendency to focus attention on long-term gains
https://www.collinsdictionary.com/dictionary/english/long-termism
I really enjoy the extent to which you’ve both taken the ball and run with it ;)
+2 helpful and thoughtful answers; really appreciate the time put in.
I agree this is a very helpful comment. I would add: these roles are not, in my view, *lesser* in any sense, for a range of reasons, and I would encourage people not to think of them in those terms.
You might have a bigger impact on the margin being the only—or one of the first few—people thinking in EA terms in a philanthropic foundation than by adding to the pool of excellence at OpenPhil. This goes for any role that involves influencing how resources are allocated—which is a LOT of roles, in charity, government, industry, academic foundations, etc.
You may not be in the presidential cabinet, or a spad to the UK prime minister, but those people are supported and enabled by others building up resources, capacity, and Overton window expansion elsewhere in government and the civil service. The ‘senior person’ on their own may not be able to gain purchase for key policy ideas and influence.
A lot of xrisk research, from biosecurity to climate change, draws on and depends on a huge body of work on biology, public policy, climate science, renewable energy, insulation in homes, and much more. Often there are gaps in research on extreme scenarios due to lack of incentives for this kind of work, and other reasons—and this may make it particularly impactful at times. But that specific work can’t be done well without drawing on all the underlying work. E.g., biorisk mitigation needs not just the people figuring out how to defend against the extreme scenarios, but also everything from people testing birds in Vietnam for H5N1 and seals in the North Sea for H7, to people planning for overflow capacity in regional hospitals, to people pushing for the value of preparedness funds in the reinsurance industry, and much more. The same goes for climate and the environment, and the same will be true for AI policy, etc.
I think there’s probably a good case to be made that in many, perhaps most, instances the most useful place for the next generally capable EA to be is *not* an EA org. And for all 80k’s great work, they can’t survey and review everything, nor tailor advice to the personal fit of the thousands, or hundreds of thousands, of people with different skillsets who can play a role in making the future better.
For EA to really make the future better to the extent that it has the potential to, it’s going to need a *much* bigger global team. And that team is going to need to be interspersed everywhere, sometimes doing glamorous stuff, sometimes doing more standard stuff that is just as important in that it makes the glamorous stuff possible. To annoy everyone with a sports analogy: the defence and midfield positions are every bit as important as the glamorous striker positions, and if you’ve got a team made up primarily of star strikers and wannabe star strikers, that team is going to underperform.
Thanks, I hadn’t seen this.
For more on this divide/points of disagreement, see Will MacAskill’s essay on the Alignment Forum (with responses from MIRI researchers and others),
and, previously, Wolfgang Schwarz’s review of Functional Decision Theory:
https://www.umsu.de/wo/2018/688
(with some LessWrong discussion here: https://www.lesswrong.com/posts/BtN6My9bSvYrNw48h/open-thread-january-2019#WocbPJvTmZcA2sKR6)
I’d also be interested in Buck’s perspectives on this topic.
Speaking as one of the people associated with the project, I’d read or skimmed ‘upper bound’ (Snyder-Beattie), ‘vulnerable world’ (Bostrom), and ‘philosophical analysis’ (Torres), and had been aware of ‘world destruction argument’ (Knutsson).
Thanks Howie, but a quick note that this was an individual take by me rather than necessarily capturing the whole group; different people within the group will have work they feel is more impactful and important.
Updates on a few of the more forward-looking items mentioned in that comment.
A paper on AI and nukes is now out here: https://www.cser.ac.uk/resources/learning-climate-change-debate-avoid-polarisation-negative-emissions/
A draft/working paper on methodologies and evidence base for xrisk/gcr is here, with a few more in peer review: http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf
We’re in the process of re-running the biological engineering horizon-scan, as well as finishing up an ’80 questions for UK biosecurity’ expert elicitation process.
We successfully hired Natalie Jones, author of one of the pieces mentioned (https://www.sciencedirect.com/science/article/abs/pii/S0016328717301179), and she’ll do international law/governance and GCR work with us.
[disclaimer: I am co-director of CSER] Another quick organisational update from CSER is that we are also recruiting for a number of positions attached to our major research strands—in AI safety; global population, sustainability and the environment; and responsible innovation and extreme technological risk. We’d be tremendously grateful for any help in spreading the word. Application deadline: August 26th. All details here:
Agree. Great work everyone who contributed.
Quick clarification—Phil Torres is an academic visitor with CSER for the term, rather than staff. But Bdixon is correct that Phil has published extensively and comprehensively on this. Thanks!
Thank you. Some specific info: Ellen Quigley joined CSER as (part-)salaried staff in January 2019 (previously she was an external collaborator). The report was published in January 2019. It was conducted and mostly completed as part of a Judge Business School project in 2018. I was happy for CSER to co-brand as (a) it’s a good piece of work; (b) it was being published by someone on staff (and others had provided some previous input); (c) it had a well-thought-out strategic aim, with good reasons (from people with a lot of expertise in the topic) to think it would be effective and timely in its aims; (d) it’s on a topic within our remit (climate/sustainability); and (e) it offered various potential networking and reputational opportunities.
Since the report launch, Ellen has focused on other projects—the report has high-value follow-up opportunities (by usual postdoctoral project standards), but there are other projects of higher priority from a GCR/Xrisk perspective. Our current thinking is that if funding that is not fungible with Xrisk funding becomes available, Ellen may supervise a postdoc/research assistant in designing/actioning follow-ups. Ellen has also accepted a more direct, action-focused part-appointment advising on the University of Cambridge’s investment and shareholder engagement strategy around climate change (https://www.staff.admin.cam.ac.uk/general-news/two-environmental-appointments-at-the-university), so her research time is more limited.
More broadly, there are a lot of reasons why centres will sometimes engage in projects with indirect impacts or longer causal chains that don’t boil down to ‘failure to understand basic prioritisation for impact’. These include: (1) good intellectual or evidence-based reasons to have confidence that indirect or longer-causal-chain approaches are likely to be effective, either in and of themselves or as part of a suite of activities; (2) the value of these projects in establishing strong networks and credibility with bodies likely to be relevant for broader Xrisk mitigation; and (3) developing the ability and skillset to engage with the machinery of the world in different regards.
Project choice will also sometimes be affected by external constraints (e.g. funding stipulations—not every organisation has full funding from fully Xrisk-aligned funders—or the need for researchers to establish/maintain reputation and credibility in their ‘home domains’ in order to remain effective in the roles they play in Xrisk research). This is likely to be particularly true in academic institutions.
I would expect that with most Xrisk organisations, particularly those actively engaged with other research communities, policy bodies, etc., there will be a suite of outputs where some are very obviously and directly relevant to Xrisk, and others are less direct or obvious but have a good rationale within an overall suite of activities.
My apologies in advance that I don’t have time to engage further due to other deadlines.
Agreed, thank you Justin. (I also hope I win the bet, and not for the money—while it is good to consider the possibility of the most severe plausible outcomes rigorously and soberly, it would be terrible if it came about in reality.) The bet resolves 28 January 2021. (Though if the outcome is within an order of magnitude of the win criterion and there is uncertainty about fatalities, I’m happy to reserve the final decision for two further years until rigorous analysis is done—e.g. see the swine flu epidemiology studies, which updated fatality estimates upwards significantly several years after the outbreak.)
To anyone else reading: I’m happy to provide up to a £250 stake against up to £50 of yours, if you want to take the same side as Justin.