Thanks John. With apologies for brevity: I don’t think I’d agree with such broad-strokes scepticism of EU scientific studies on environment, but this is a topic for a longer conversation. Ditto (4).
Re: 5, I don’t expect this to be the framing that Partha adopts in the review in question; rather I expect it will be in line with the kinds of analysis and framings he has adopted in his work in this space in the past years (on the basis of which he was chosen for this appointment). Thanks!
Re 5: To be honest, I doubt that his framing matters much. Whether it’s “influential person says Y should receive attention” or “influential person says Y should receive attention with a lot of caveats” it’s still a distraction if we think Y is not nearly as relevant as X.
I think this points to a wider issue about risk communication and advocacy. Should the x-risk community:
1) advocate for many approaches to x-risk and be opportunistic in where policy-makers are responsive, or
2) advocate for addressing only the biggest risks, and bullishly pursue only opportunities that address those risks?
The answer seems to depend on ‘how widely is x-risk distributed over various risk factors?’, and different research organizations seem to hold different opinions. Is CSER’s view that x-risk is widely or narrowly distributed?
>Re 5: To be honest, I doubt that his framing matters much. Whether it’s “influential person says Y should receive attention” or “influential person says Y should receive attention with a lot of caveats” it’s still a distraction if we think Y is not nearly as relevant as X.
From my experience of engaging with policymakers on Xrisk/GCR, I disagree with this way of looking at things (and to an extent with John’s related concerns). If Partha were directly pushing biodiversity loss as a direct existential risk to humanity requiring policy action, without evidence for this, then yes, I would have concerns. But that’s not what’s happening here. At most, some ‘potential worst case scenarios’ might be surfaced, and referred to centres like ours for further research to support or rule out.
A few points:
1) I think it’s wrong to view this as a zero-sum game. There’s a huge, huge space for policymakers to care more about GCR, Xrisk, and the long-term future than they currently do. Working with them on a global-risk-relevant topic they’re already planning to work on (biodiversity and economic growth), as Partha is doing, is not going to crowd out the space that Xrisk concerns could otherwise occupy.
2) What we have here is a leading scholar (with a background specifically in economics and, in recent years, biodiversity/sustainability) working in a high-profile fashion on a global-risk-relevant topic (biodiversity loss and economics), who also has strong links to an existential risk research centre. This establishes useful links. It demonstrates that scholars associated with existential risk (a flaky-seeming topic not so long ago, and still in some circles) do good work and are useful and trustworthy for governments on risks already within their ‘attention’ Overton window. And it helps the legitimacy and reputation of existential risk research: through these links, interactions, and reputable work on related topics, it nudges existential risks into the Overton window of risks that policymakers take seriously and act on.
More broadly, and to your later points:
Working on these sorts of processes is also an effective way of understanding how governance and policy around major risk works, and developing the skillset and positioning needed to engage more effectively around other risks (e.g. existential).
We don’t know all the correct actions to take to prevent existential risks right now: (i) in some cases because the xrisks will only come to light in future; (ii) in some cases because we know the problem but don’t yet know how to solve it; (iii) in some cases because we have a sense of the solution but not a good enough sense of how to put it into action. For all of these, doing some engagement in policy processes where we can work to mitigate global risks currently within the policy Overton window can be useful.
I do think the Xrisk community needs ‘purists’, and there will be points at which the community will need to push hard for prioritisation of a particular xrisk with government. But most within the community would agree it’s not yet that time with transformative AI; it’s not that time with nano; there’s disagreement over whether it is with nuclear. With bio, a productive approach is expanding the Overton window of risks within current biosecurity and biosafety, which is made easier by being clearly competent and useful within these broader domains.
What it is time for is, internally, doing the research to develop answers; and externally, with policy communities, developing the expertise to engage with the mechanics of the world, building the networks and reputation to be effective, and embedding the foresight and risk-scanning/response mechanisms that will allow governments to be more responsive. Some of that involves engaging with a wider range of global (but not necessarily existential) risk issues. (As well as other indirect work: e.g. the AI safety/policy community not just working on the control problem and the deployment problem, but also getting into position in a wide range of other ways that often involve broader processes or non-existential risk issues.)
To your final question, my own individual view is that mitigating xrisk will involve a small number of big opportunities/actions at the right times, underpinned and made possible by a large number of smaller and more widely distributed ones.
Apologies that I’m now out of time for further engagement online due to other deadlines.